00:00:00.000 Started by upstream project "autotest-per-patch" build number 132528
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.021 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.022 The recommended git tool is: git
00:00:00.022 using credential 00000000-0000-0000-0000-000000000002
00:00:00.024 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.047 Fetching changes from the remote Git repository
00:00:00.049 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.076 Using shallow fetch with depth 1
00:00:00.076 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.076 > git --version # timeout=10
00:00:00.101 > git --version # 'git version 2.39.2'
00:00:00.101 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.116 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.116 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.141 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.154 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.167 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.167 > git config core.sparsecheckout # timeout=10
00:00:05.179 > git read-tree -mu HEAD # timeout=10
00:00:05.196 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.219 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.220 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.311 [Pipeline] Start of Pipeline
00:00:05.326 [Pipeline] library
00:00:05.327 Loading library shm_lib@master
00:00:05.328 Library shm_lib@master is cached. Copying from home.
00:00:05.348 [Pipeline] node
00:00:20.368 Still waiting to schedule task
00:00:20.368 Waiting for next available executor on ‘vagrant-vm-host’
00:20:49.607 Running on VM-host-SM4 in /var/jenkins/workspace/raid-vg-autotest
00:20:49.608 [Pipeline] {
00:20:49.618 [Pipeline] catchError
00:20:49.619 [Pipeline] {
00:20:49.633 [Pipeline] wrap
00:20:49.641 [Pipeline] {
00:20:49.649 [Pipeline] stage
00:20:49.651 [Pipeline] { (Prologue)
00:20:49.672 [Pipeline] echo
00:20:49.675 Node: VM-host-SM4
00:20:49.681 [Pipeline] cleanWs
00:20:49.690 [WS-CLEANUP] Deleting project workspace...
00:20:49.690 [WS-CLEANUP] Deferred wipeout is used...
00:20:49.698 [WS-CLEANUP] done
00:20:49.914 [Pipeline] setCustomBuildProperty
00:20:50.004 [Pipeline] httpRequest
00:20:50.378 [Pipeline] echo
00:20:50.380 Sorcerer 10.211.164.101 is alive
00:20:50.392 [Pipeline] retry
00:20:50.394 [Pipeline] {
00:20:50.417 [Pipeline] httpRequest
00:20:50.422 HttpMethod: GET
00:20:50.423 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:20:50.424 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:20:50.425 Response Code: HTTP/1.1 200 OK
00:20:50.426 Success: Status code 200 is in the accepted range: 200,404
00:20:50.426 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:20:50.572 [Pipeline] }
00:20:50.590 [Pipeline] // retry
00:20:50.598 [Pipeline] sh
00:20:50.886 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:20:50.903 [Pipeline] httpRequest
00:20:51.777 [Pipeline] echo
00:20:51.779 Sorcerer 10.211.164.101 is alive
00:20:51.788 [Pipeline] retry
00:20:51.790 [Pipeline] {
00:20:51.806 [Pipeline] httpRequest
00:20:51.811 HttpMethod: GET
00:20:51.812 URL: http://10.211.164.101/packages/spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz
00:20:51.812 Sending request to url: http://10.211.164.101/packages/spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz
00:20:51.814 Response Code: HTTP/1.1 200 OK
00:20:51.815 Success: Status code 200 is in the accepted range: 200,404
00:20:51.815 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz
00:20:59.194 [Pipeline] }
00:20:59.213 [Pipeline] // retry
00:20:59.222 [Pipeline] sh
00:20:59.509 + tar --no-same-owner -xf spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz
00:21:02.102 [Pipeline] sh
00:21:02.376 + git -C spdk log --oneline -n5
00:21:02.377 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:21:02.377 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT
00:21:02.377 e93f0f941 bdev/malloc: Support accel sequence when DIF is enabled
00:21:02.377 27c6508ea bdev: Add spdk_bdev_io_hide_metadata() for bdev modules
00:21:02.377 c86e5b182 bdev/malloc: Extract internal of verify_pi() for code reuse
00:21:02.395 [Pipeline] writeFile
00:21:02.408 [Pipeline] sh
00:21:02.682 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:21:02.693 [Pipeline] sh
00:21:03.000 + cat autorun-spdk.conf
00:21:03.001 SPDK_RUN_FUNCTIONAL_TEST=1
00:21:03.001 SPDK_RUN_ASAN=1
00:21:03.001 SPDK_RUN_UBSAN=1
00:21:03.001 SPDK_TEST_RAID=1
00:21:03.001 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:03.007 RUN_NIGHTLY=0
00:21:03.009 [Pipeline] }
00:21:03.024 [Pipeline] // stage
00:21:03.039 [Pipeline] stage
00:21:03.042 [Pipeline] { (Run VM)
00:21:03.057 [Pipeline] sh
00:21:03.339 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:21:03.339 + echo 'Start stage prepare_nvme.sh'
00:21:03.339 Start stage prepare_nvme.sh
00:21:03.339 + [[ -n 8 ]]
00:21:03.339 + disk_prefix=ex8
00:21:03.339 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:21:03.339 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:21:03.339 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:21:03.339 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:21:03.339 ++ SPDK_RUN_ASAN=1
00:21:03.339 ++ SPDK_RUN_UBSAN=1
00:21:03.339 ++ SPDK_TEST_RAID=1
00:21:03.339 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:03.339 ++ RUN_NIGHTLY=0
00:21:03.339 + cd /var/jenkins/workspace/raid-vg-autotest
00:21:03.339 + nvme_files=()
00:21:03.339 + declare -A nvme_files
00:21:03.339 + backend_dir=/var/lib/libvirt/images/backends
00:21:03.339 + nvme_files['nvme.img']=5G
00:21:03.339 + nvme_files['nvme-cmb.img']=5G
00:21:03.339 + nvme_files['nvme-multi0.img']=4G
00:21:03.339 + nvme_files['nvme-multi1.img']=4G
00:21:03.339 + nvme_files['nvme-multi2.img']=4G
00:21:03.339 + nvme_files['nvme-openstack.img']=8G
00:21:03.339 + nvme_files['nvme-zns.img']=5G
00:21:03.339 + (( SPDK_TEST_NVME_PMR == 1 ))
00:21:03.339 + (( SPDK_TEST_FTL == 1 ))
00:21:03.339 + (( SPDK_TEST_NVME_FDP == 1 ))
00:21:03.339 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:21:03.339 + for nvme in "${!nvme_files[@]}"
00:21:03.339 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G
00:21:03.339 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:21:03.339 + for nvme in "${!nvme_files[@]}"
00:21:03.339 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G
00:21:03.339 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:21:03.339 + for nvme in "${!nvme_files[@]}"
00:21:03.339 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G
00:21:03.339 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:21:03.339 + for nvme in "${!nvme_files[@]}"
00:21:03.339 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G
00:21:03.339 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:21:03.339 + for nvme in "${!nvme_files[@]}"
00:21:03.339 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G
00:21:03.597 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:21:03.597 + for nvme in "${!nvme_files[@]}"
00:21:03.597 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G
00:21:03.597 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:21:03.597 + for nvme in "${!nvme_files[@]}"
00:21:03.597 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G
00:21:04.577 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:21:04.577 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu
00:21:04.577 + echo 'End stage prepare_nvme.sh'
00:21:04.577 End stage prepare_nvme.sh
00:21:04.589 [Pipeline] sh
00:21:04.869 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:21:04.869 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -H -a -v -f fedora39
00:21:04.869
00:21:04.869 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:21:04.869 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:21:04.869 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:21:04.869 HELP=0
00:21:04.869 DRY_RUN=0
00:21:04.869 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,
00:21:04.869 NVME_DISKS_TYPE=nvme,nvme,
00:21:04.869 NVME_AUTO_CREATE=0
00:21:04.869 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,
00:21:04.869 NVME_CMB=,,
00:21:04.869 NVME_PMR=,,
00:21:04.869 NVME_ZNS=,,
00:21:04.869 NVME_MS=,,
00:21:04.869 NVME_FDP=,,
00:21:04.869 SPDK_VAGRANT_DISTRO=fedora39
00:21:04.869 SPDK_VAGRANT_VMCPU=10
00:21:04.869 SPDK_VAGRANT_VMRAM=12288
00:21:04.869 SPDK_VAGRANT_PROVIDER=libvirt
00:21:04.869 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:21:04.869 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:21:04.869 SPDK_OPENSTACK_NETWORK=0
00:21:04.869 VAGRANT_PACKAGE_BOX=0
00:21:04.869 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:21:04.869 FORCE_DISTRO=true
00:21:04.869 VAGRANT_BOX_VERSION=
00:21:04.869 EXTRA_VAGRANTFILES=
00:21:04.869 NIC_MODEL=e1000
00:21:04.869
00:21:04.869 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:21:04.869 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:21:08.154 Bringing machine 'default' up with 'libvirt' provider...
00:21:08.720 ==> default: Creating image (snapshot of base box volume).
00:21:08.979 ==> default: Creating domain with the following settings...
00:21:08.979 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732641465_c65a23a44a3a0fce1bbc
00:21:08.979 ==> default: -- Domain type: kvm
00:21:08.979 ==> default: -- Cpus: 10
00:21:08.979 ==> default: -- Feature: acpi
00:21:08.979 ==> default: -- Feature: apic
00:21:08.979 ==> default: -- Feature: pae
00:21:08.979 ==> default: -- Memory: 12288M
00:21:08.979 ==> default: -- Memory Backing: hugepages:
00:21:08.979 ==> default: -- Management MAC:
00:21:08.979 ==> default: -- Loader:
00:21:08.979 ==> default: -- Nvram:
00:21:08.979 ==> default: -- Base box: spdk/fedora39
00:21:08.979 ==> default: -- Storage pool: default
00:21:08.979 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732641465_c65a23a44a3a0fce1bbc.img (20G)
00:21:08.979 ==> default: -- Volume Cache: default
00:21:08.979 ==> default: -- Kernel:
00:21:08.979 ==> default: -- Initrd:
00:21:08.979 ==> default: -- Graphics Type: vnc
00:21:08.979 ==> default: -- Graphics Port: -1
00:21:08.979 ==> default: -- Graphics IP: 127.0.0.1
00:21:08.979 ==> default: -- Graphics Password: Not defined
00:21:08.979 ==> default: -- Video Type: cirrus
00:21:08.979 ==> default: -- Video VRAM: 9216
00:21:08.979 ==> default: -- Sound Type:
00:21:08.979 ==> default: -- Keymap: en-us
00:21:08.979 ==> default: -- TPM Path:
00:21:08.979 ==> default: -- INPUT: type=mouse, bus=ps2
00:21:08.979 ==> default: -- Command line args:
00:21:08.979 ==> default: -> value=-device,
00:21:08.979 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:21:08.979 ==> default: -> value=-drive,
00:21:08.979 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0,
00:21:08.979 ==> default: -> value=-device,
00:21:08.979 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:08.979 ==> default: -> value=-device,
00:21:08.979 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:21:08.979 ==> default: -> value=-drive,
00:21:08.979 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:21:08.979 ==> default: -> value=-device,
00:21:08.979 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:08.979 ==> default: -> value=-drive,
00:21:08.979 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:21:08.979 ==> default: -> value=-device,
00:21:08.979 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:08.979 ==> default: -> value=-drive,
00:21:08.979 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:21:08.979 ==> default: -> value=-device,
00:21:08.979 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:09.237 ==> default: Creating shared folders metadata...
00:21:09.237 ==> default: Starting domain.
00:21:11.139 ==> default: Waiting for domain to get an IP address...
00:21:33.110 ==> default: Waiting for SSH to become available...
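For readability, the paired `value=` entries logged above can be reassembled into the NVMe portion of the QEMU command line that vagrant-libvirt passes to the emulator. This is a reconstruction built only from values visible in this log (the emulator path comes from the `SPDK_QEMU_EMULATOR` setting printed earlier), not a command captured verbatim from the build:

```shell
#!/usr/bin/env bash
# Reconstruction of the custom QEMU arguments from the "value=" lines above.
# QEMU path taken from SPDK_QEMU_EMULATOR earlier in this log.
QEMU=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
BACKENDS=/var/lib/libvirt/images/backends

# Controller nvme-0 gets one namespace (ex8-nvme.img); controller nvme-1
# gets three namespaces (the multi0/multi1/multi2 backing images).
args=(
  -device "nvme,id=nvme-0,serial=12340,addr=0x10"
  -drive  "format=raw,file=$BACKENDS/ex8-nvme.img,if=none,id=nvme-0-drive0"
  -device "nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096"
  -device "nvme,id=nvme-1,serial=12341,addr=0x11"
  -drive  "format=raw,file=$BACKENDS/ex8-nvme-multi0.img,if=none,id=nvme-1-drive0"
  -device "nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096"
  -drive  "format=raw,file=$BACKENDS/ex8-nvme-multi1.img,if=none,id=nvme-1-drive1"
  -device "nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096"
  -drive  "format=raw,file=$BACKENDS/ex8-nvme-multi2.img,if=none,id=nvme-1-drive2"
  -device "nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096"
)

# Print the assembled invocation (the real build runs it under libvirt).
echo "$QEMU ${args[*]}"
```

This layout explains the device tree seen later in the log (`nvme0` with `nvme0n1`, and `nvme1` with `nvme1n1 nvme1n2 nvme1n3`): one namespace per `nvme-ns` device, attached by `bus=` to its parent controller.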
00:21:33.110 ==> default: Configuring and enabling network interfaces...
00:21:36.403 default: SSH address: 192.168.121.126:22
00:21:36.403 default: SSH username: vagrant
00:21:36.403 default: SSH auth method: private key
00:21:38.947 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:21:47.054 ==> default: Mounting SSHFS shared folder...
00:21:48.956 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:21:48.956 ==> default: Checking Mount..
00:21:50.334 ==> default: Folder Successfully Mounted!
00:21:50.334 ==> default: Running provisioner: file...
00:21:50.900 default: ~/.gitconfig => .gitconfig
00:21:51.465
00:21:51.465 SUCCESS!
00:21:51.465
00:21:51.465 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:21:51.465 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:21:51.465 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:21:51.465
00:21:51.474 [Pipeline] }
00:21:51.489 [Pipeline] // stage
00:21:51.498 [Pipeline] dir
00:21:51.498 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:21:51.500 [Pipeline] {
00:21:51.515 [Pipeline] catchError
00:21:51.517 [Pipeline] {
00:21:51.531 [Pipeline] sh
00:21:51.811 + vagrant ssh-config --host vagrant
00:21:51.811 + sed -ne /^Host/,$p
00:21:51.811 + tee ssh_conf
00:21:56.004 Host vagrant
00:21:56.004 HostName 192.168.121.126
00:21:56.004 User vagrant
00:21:56.004 Port 22
00:21:56.004 UserKnownHostsFile /dev/null
00:21:56.004 StrictHostKeyChecking no
00:21:56.004 PasswordAuthentication no
00:21:56.004 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:21:56.004 IdentitiesOnly yes
00:21:56.004 LogLevel FATAL
00:21:56.004 ForwardAgent yes
00:21:56.004 ForwardX11 yes
00:21:56.004
00:21:56.019 [Pipeline] withEnv
00:21:56.022 [Pipeline] {
00:21:56.036 [Pipeline] sh
00:21:56.315 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:21:56.315 source /etc/os-release
00:21:56.315 [[ -e /image.version ]] && img=$(< /image.version)
00:21:56.315 # Minimal, systemd-like check.
00:21:56.315 if [[ -e /.dockerenv ]]; then
00:21:56.315 # Clear garbage from the node's name:
00:21:56.315 # agt-er_autotest_547-896 -> autotest_547-896
00:21:56.315 # $HOSTNAME is the actual container id
00:21:56.315 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:21:56.315 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:21:56.315 # We can assume this is a mount from a host where container is running,
00:21:56.315 # so fetch its hostname to easily identify the target swarm worker.
00:21:56.315 container="$(< /etc/hostname) ($agent)"
00:21:56.315 else
00:21:56.315 # Fallback
00:21:56.315 container=$agent
00:21:56.315 fi
00:21:56.315 fi
00:21:56.315 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:21:56.315
00:21:56.584 [Pipeline] }
00:21:56.601 [Pipeline] // withEnv
00:21:56.610 [Pipeline] setCustomBuildProperty
00:21:56.625 [Pipeline] stage
00:21:56.627 [Pipeline] { (Tests)
00:21:56.645 [Pipeline] sh
00:21:56.925 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:21:57.197 [Pipeline] sh
00:21:57.480 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:21:57.760 [Pipeline] timeout
00:21:57.761 Timeout set to expire in 1 hr 30 min
00:21:57.763 [Pipeline] {
00:21:57.780 [Pipeline] sh
00:21:58.056 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:21:58.623 HEAD is now at f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:21:58.635 [Pipeline] sh
00:21:58.914 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:21:59.186 [Pipeline] sh
00:21:59.467 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:21:59.743 [Pipeline] sh
00:22:00.023 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:22:00.281 ++ readlink -f spdk_repo
00:22:00.281 + DIR_ROOT=/home/vagrant/spdk_repo
00:22:00.281 + [[ -n /home/vagrant/spdk_repo ]]
00:22:00.281 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:22:00.281 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:22:00.281 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:22:00.281 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:22:00.282 + [[ -d /home/vagrant/spdk_repo/output ]]
00:22:00.282 + [[ raid-vg-autotest == pkgdep-* ]]
00:22:00.282 + cd /home/vagrant/spdk_repo
00:22:00.282 + source /etc/os-release
00:22:00.282 ++ NAME='Fedora Linux'
00:22:00.282 ++ VERSION='39 (Cloud Edition)'
00:22:00.282 ++ ID=fedora
00:22:00.282 ++ VERSION_ID=39
00:22:00.282 ++ VERSION_CODENAME=
00:22:00.282 ++ PLATFORM_ID=platform:f39
00:22:00.282 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:22:00.282 ++ ANSI_COLOR='0;38;2;60;110;180'
00:22:00.282 ++ LOGO=fedora-logo-icon
00:22:00.282 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:22:00.282 ++ HOME_URL=https://fedoraproject.org/
00:22:00.282 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:22:00.282 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:22:00.282 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:22:00.282 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:22:00.282 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:22:00.282 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:22:00.282 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:22:00.282 ++ SUPPORT_END=2024-11-12
00:22:00.282 ++ VARIANT='Cloud Edition'
00:22:00.282 ++ VARIANT_ID=cloud
00:22:00.282 + uname -a
00:22:00.282 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:22:00.282 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:22:00.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:22:00.849 Hugepages
00:22:00.849 node hugesize free / total
00:22:00.849 node0 1048576kB 0 / 0
00:22:00.849 node0 2048kB 0 / 0
00:22:00.849
00:22:00.849 Type BDF Vendor Device NUMA Driver Device Block devices
00:22:00.849 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:22:00.849 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:22:00.849 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:22:00.849 + rm -f /tmp/spdk-ld-path
00:22:00.849 + source autorun-spdk.conf
00:22:00.849 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:22:00.849 ++ SPDK_RUN_ASAN=1
00:22:00.849 ++ SPDK_RUN_UBSAN=1
00:22:00.849 ++ SPDK_TEST_RAID=1
00:22:00.849 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:00.849 ++ RUN_NIGHTLY=0
00:22:00.849 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:22:00.849 + [[ -n '' ]]
00:22:00.849 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:22:00.849 + for M in /var/spdk/build-*-manifest.txt
00:22:00.849 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:22:00.849 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:22:00.849 + for M in /var/spdk/build-*-manifest.txt
00:22:00.849 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:22:00.849 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:22:00.849 + for M in /var/spdk/build-*-manifest.txt
00:22:00.849 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:22:00.849 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:22:00.849 ++ uname
00:22:00.849 + [[ Linux == \L\i\n\u\x ]]
00:22:00.849 + sudo dmesg -T
00:22:00.849 + sudo dmesg --clear
00:22:01.109 + dmesg_pid=5254
00:22:01.109 + sudo dmesg -Tw
00:22:01.109 + [[ Fedora Linux == FreeBSD ]]
00:22:01.109 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:01.109 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:01.109 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:22:01.109 + [[ -x /usr/src/fio-static/fio ]]
00:22:01.109 + export FIO_BIN=/usr/src/fio-static/fio
00:22:01.109 + FIO_BIN=/usr/src/fio-static/fio
00:22:01.109 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:22:01.109 + [[ ! -v VFIO_QEMU_BIN ]]
00:22:01.109 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:22:01.109 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:01.109 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:01.109 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:22:01.109 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:01.109 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:01.109 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:01.109 17:18:38 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:22:01.109 17:18:38 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:01.109 17:18:38 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:22:01.109 17:18:38 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:22:01.109 17:18:38 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:22:01.109 17:18:38 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:22:01.109 17:18:38 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:01.109 17:18:38 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:22:01.109 17:18:38 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:22:01.109 17:18:38 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:01.109 17:18:38 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:22:01.109 17:18:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:01.109 17:18:38 -- scripts/common.sh@15 -- $ shopt -s extglob
00:22:01.109 17:18:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:22:01.109 17:18:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:01.109 17:18:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:01.109 17:18:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:01.109 17:18:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:01.109 17:18:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:01.109 17:18:38 -- paths/export.sh@5 -- $ export PATH
00:22:01.109 17:18:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:01.109 17:18:38 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:22:01.109 17:18:38 -- common/autobuild_common.sh@493 -- $ date +%s
00:22:01.109 17:18:38 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732641518.XXXXXX
00:22:01.109 17:18:38 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732641518.0pZegW
00:22:01.109 17:18:38 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:22:01.109 17:18:38 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:22:01.109 17:18:38 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:22:01.109 17:18:38 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:22:01.109 17:18:38 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:22:01.109 17:18:38 -- common/autobuild_common.sh@509 -- $ get_config_params
00:22:01.109 17:18:38 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:22:01.109 17:18:38 -- common/autotest_common.sh@10 -- $ set +x
00:22:01.109 17:18:38 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:22:01.109 17:18:38 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:22:01.109 17:18:38 -- pm/common@17 -- $ local monitor
00:22:01.109 17:18:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:01.109 17:18:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:01.109 17:18:38 -- pm/common@21 -- $ date +%s
00:22:01.109 17:18:38 -- pm/common@25 -- $ sleep 1
00:22:01.109 17:18:38 -- pm/common@21 -- $ date +%s
00:22:01.109 17:18:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641518
00:22:01.109 17:18:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641518
00:22:01.367 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641518_collect-cpu-load.pm.log
00:22:01.367 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641518_collect-vmstat.pm.log
00:22:02.307 17:18:39 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:22:02.307 17:18:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:22:02.307 17:18:39 -- spdk/autobuild.sh@12 -- $ umask 022
00:22:02.307 17:18:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:22:02.307 17:18:39 -- spdk/autobuild.sh@16 -- $ date -u
00:22:02.307 Tue Nov 26 05:18:39 PM UTC 2024
00:22:02.307 17:18:39 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:22:02.307 v25.01-pre-268-gf7ce15267
00:22:02.307 17:18:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:22:02.307 17:18:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:22:02.307 17:18:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:22:02.307 17:18:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:22:02.307 17:18:39 -- common/autotest_common.sh@10 -- $ set +x
00:22:02.307 ************************************
00:22:02.307 START TEST asan
00:22:02.307 ************************************
00:22:02.307 using asan
00:22:02.307 17:18:39 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:22:02.307
00:22:02.307 real 0m0.000s
00:22:02.307 user 0m0.000s
00:22:02.307 sys 0m0.000s
00:22:02.307 17:18:39 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:22:02.307 17:18:39 asan -- common/autotest_common.sh@10 -- $ set +x
00:22:02.307 ************************************
00:22:02.307 END TEST asan
00:22:02.307 ************************************
00:22:02.307 17:18:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:22:02.307 17:18:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:22:02.307 17:18:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:22:02.307 17:18:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:22:02.307 17:18:39 -- common/autotest_common.sh@10 -- $ set +x
00:22:02.307 ************************************
00:22:02.307 START TEST ubsan
00:22:02.307 ************************************
00:22:02.307 using ubsan
00:22:02.307 17:18:39 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:22:02.307
00:22:02.307 real 0m0.000s
00:22:02.307 user 0m0.000s
00:22:02.307 sys 0m0.000s
00:22:02.307 17:18:39 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:22:02.307 17:18:39 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:22:02.307 ************************************
00:22:02.307 END TEST ubsan
00:22:02.307 ************************************
00:22:02.307 17:18:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:22:02.307 17:18:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:22:02.307 17:18:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:22:02.307 17:18:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:22:02.307 17:18:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:22:02.307 17:18:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:22:02.307 17:18:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:22:02.307 17:18:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:22:02.307 17:18:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:22:02.307 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:22:02.307 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:22:02.875 Using 'verbs' RDMA provider
00:22:19.118 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:22:31.365 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:22:31.624 Creating mk/config.mk...done.
00:22:31.624 Creating mk/cc.flags.mk...done.
00:22:31.624 Type 'make' to build.
00:22:31.624 17:19:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:22:31.624 17:19:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:22:31.624 17:19:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:22:31.624 17:19:09 -- common/autotest_common.sh@10 -- $ set +x
00:22:31.624 ************************************
00:22:31.624 START TEST make
00:22:31.624 ************************************
00:22:31.624 17:19:09 make -- common/autotest_common.sh@1129 -- $ make -j10
00:22:32.190 make[1]: Nothing to be done for 'all'.
00:22:47.057 The Meson build system 00:22:47.057 Version: 1.5.0 00:22:47.057 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:22:47.057 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:22:47.057 Build type: native build 00:22:47.057 Program cat found: YES (/usr/bin/cat) 00:22:47.057 Project name: DPDK 00:22:47.057 Project version: 24.03.0 00:22:47.057 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:22:47.057 C linker for the host machine: cc ld.bfd 2.40-14 00:22:47.057 Host machine cpu family: x86_64 00:22:47.057 Host machine cpu: x86_64 00:22:47.057 Message: ## Building in Developer Mode ## 00:22:47.057 Program pkg-config found: YES (/usr/bin/pkg-config) 00:22:47.057 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:22:47.057 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:22:47.057 Program python3 found: YES (/usr/bin/python3) 00:22:47.057 Program cat found: YES (/usr/bin/cat) 00:22:47.057 Compiler for C supports arguments -march=native: YES 00:22:47.057 Checking for size of "void *" : 8 00:22:47.057 Checking for size of "void *" : 8 (cached) 00:22:47.057 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:22:47.057 Library m found: YES 00:22:47.057 Library numa found: YES 00:22:47.057 Has header "numaif.h" : YES 00:22:47.057 Library fdt found: NO 00:22:47.057 Library execinfo found: NO 00:22:47.057 Has header "execinfo.h" : YES 00:22:47.057 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:22:47.057 Run-time dependency libarchive found: NO (tried pkgconfig) 00:22:47.057 Run-time dependency libbsd found: NO (tried pkgconfig) 00:22:47.057 Run-time dependency jansson found: NO (tried pkgconfig) 00:22:47.057 Run-time dependency openssl found: YES 3.1.1 00:22:47.057 Run-time dependency libpcap found: YES 1.10.4 00:22:47.057 Has header "pcap.h" with dependency 
libpcap: YES 00:22:47.057 Compiler for C supports arguments -Wcast-qual: YES 00:22:47.057 Compiler for C supports arguments -Wdeprecated: YES 00:22:47.057 Compiler for C supports arguments -Wformat: YES 00:22:47.057 Compiler for C supports arguments -Wformat-nonliteral: NO 00:22:47.057 Compiler for C supports arguments -Wformat-security: NO 00:22:47.057 Compiler for C supports arguments -Wmissing-declarations: YES 00:22:47.057 Compiler for C supports arguments -Wmissing-prototypes: YES 00:22:47.057 Compiler for C supports arguments -Wnested-externs: YES 00:22:47.057 Compiler for C supports arguments -Wold-style-definition: YES 00:22:47.057 Compiler for C supports arguments -Wpointer-arith: YES 00:22:47.057 Compiler for C supports arguments -Wsign-compare: YES 00:22:47.057 Compiler for C supports arguments -Wstrict-prototypes: YES 00:22:47.057 Compiler for C supports arguments -Wundef: YES 00:22:47.057 Compiler for C supports arguments -Wwrite-strings: YES 00:22:47.057 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:22:47.057 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:22:47.057 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:22:47.057 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:22:47.057 Program objdump found: YES (/usr/bin/objdump) 00:22:47.057 Compiler for C supports arguments -mavx512f: YES 00:22:47.057 Checking if "AVX512 checking" compiles: YES 00:22:47.057 Fetching value of define "__SSE4_2__" : 1 00:22:47.057 Fetching value of define "__AES__" : 1 00:22:47.057 Fetching value of define "__AVX__" : 1 00:22:47.057 Fetching value of define "__AVX2__" : 1 00:22:47.057 Fetching value of define "__AVX512BW__" : 1 00:22:47.057 Fetching value of define "__AVX512CD__" : 1 00:22:47.057 Fetching value of define "__AVX512DQ__" : 1 00:22:47.057 Fetching value of define "__AVX512F__" : 1 00:22:47.057 Fetching value of define "__AVX512VL__" : 1 00:22:47.057 Fetching value of define 
"__PCLMUL__" : 1 00:22:47.057 Fetching value of define "__RDRND__" : 1 00:22:47.057 Fetching value of define "__RDSEED__" : 1 00:22:47.057 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:22:47.057 Fetching value of define "__znver1__" : (undefined) 00:22:47.057 Fetching value of define "__znver2__" : (undefined) 00:22:47.057 Fetching value of define "__znver3__" : (undefined) 00:22:47.057 Fetching value of define "__znver4__" : (undefined) 00:22:47.057 Library asan found: YES 00:22:47.057 Compiler for C supports arguments -Wno-format-truncation: YES 00:22:47.057 Message: lib/log: Defining dependency "log" 00:22:47.057 Message: lib/kvargs: Defining dependency "kvargs" 00:22:47.057 Message: lib/telemetry: Defining dependency "telemetry" 00:22:47.057 Library rt found: YES 00:22:47.057 Checking for function "getentropy" : NO 00:22:47.057 Message: lib/eal: Defining dependency "eal" 00:22:47.057 Message: lib/ring: Defining dependency "ring" 00:22:47.057 Message: lib/rcu: Defining dependency "rcu" 00:22:47.057 Message: lib/mempool: Defining dependency "mempool" 00:22:47.057 Message: lib/mbuf: Defining dependency "mbuf" 00:22:47.057 Fetching value of define "__PCLMUL__" : 1 (cached) 00:22:47.057 Fetching value of define "__AVX512F__" : 1 (cached) 00:22:47.057 Fetching value of define "__AVX512BW__" : 1 (cached) 00:22:47.057 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:22:47.057 Fetching value of define "__AVX512VL__" : 1 (cached) 00:22:47.057 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:22:47.057 Compiler for C supports arguments -mpclmul: YES 00:22:47.057 Compiler for C supports arguments -maes: YES 00:22:47.057 Compiler for C supports arguments -mavx512f: YES (cached) 00:22:47.057 Compiler for C supports arguments -mavx512bw: YES 00:22:47.057 Compiler for C supports arguments -mavx512dq: YES 00:22:47.057 Compiler for C supports arguments -mavx512vl: YES 00:22:47.057 Compiler for C supports arguments -mvpclmulqdq: YES 
00:22:47.057 Compiler for C supports arguments -mavx2: YES 00:22:47.057 Compiler for C supports arguments -mavx: YES 00:22:47.057 Message: lib/net: Defining dependency "net" 00:22:47.057 Message: lib/meter: Defining dependency "meter" 00:22:47.057 Message: lib/ethdev: Defining dependency "ethdev" 00:22:47.057 Message: lib/pci: Defining dependency "pci" 00:22:47.057 Message: lib/cmdline: Defining dependency "cmdline" 00:22:47.057 Message: lib/hash: Defining dependency "hash" 00:22:47.057 Message: lib/timer: Defining dependency "timer" 00:22:47.057 Message: lib/compressdev: Defining dependency "compressdev" 00:22:47.057 Message: lib/cryptodev: Defining dependency "cryptodev" 00:22:47.057 Message: lib/dmadev: Defining dependency "dmadev" 00:22:47.057 Compiler for C supports arguments -Wno-cast-qual: YES 00:22:47.057 Message: lib/power: Defining dependency "power" 00:22:47.057 Message: lib/reorder: Defining dependency "reorder" 00:22:47.057 Message: lib/security: Defining dependency "security" 00:22:47.057 Has header "linux/userfaultfd.h" : YES 00:22:47.057 Has header "linux/vduse.h" : YES 00:22:47.057 Message: lib/vhost: Defining dependency "vhost" 00:22:47.057 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:22:47.057 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:22:47.057 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:22:47.057 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:22:47.057 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:22:47.057 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:22:47.057 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:22:47.057 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:22:47.057 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:22:47.057 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:22:47.057 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:22:47.057 Configuring doxy-api-html.conf using configuration 00:22:47.057 Configuring doxy-api-man.conf using configuration 00:22:47.057 Program mandb found: YES (/usr/bin/mandb) 00:22:47.057 Program sphinx-build found: NO 00:22:47.057 Configuring rte_build_config.h using configuration 00:22:47.057 Message: 00:22:47.057 ================= 00:22:47.057 Applications Enabled 00:22:47.057 ================= 00:22:47.057 00:22:47.057 apps: 00:22:47.057 00:22:47.057 00:22:47.057 Message: 00:22:47.057 ================= 00:22:47.057 Libraries Enabled 00:22:47.057 ================= 00:22:47.057 00:22:47.057 libs: 00:22:47.057 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:22:47.057 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:22:47.057 cryptodev, dmadev, power, reorder, security, vhost, 00:22:47.057 00:22:47.057 Message: 00:22:47.057 =============== 00:22:47.057 Drivers Enabled 00:22:47.057 =============== 00:22:47.057 00:22:47.057 common: 00:22:47.057 00:22:47.057 bus: 00:22:47.057 pci, vdev, 00:22:47.057 mempool: 00:22:47.057 ring, 00:22:47.057 dma: 00:22:47.057 00:22:47.057 net: 00:22:47.057 00:22:47.057 crypto: 00:22:47.058 00:22:47.058 compress: 00:22:47.058 00:22:47.058 vdpa: 00:22:47.058 00:22:47.058 00:22:47.058 Message: 00:22:47.058 ================= 00:22:47.058 Content Skipped 00:22:47.058 ================= 00:22:47.058 00:22:47.058 apps: 00:22:47.058 dumpcap: explicitly disabled via build config 00:22:47.058 graph: explicitly disabled via build config 00:22:47.058 pdump: explicitly disabled via build config 00:22:47.058 proc-info: explicitly disabled via build config 00:22:47.058 test-acl: explicitly disabled via build config 00:22:47.058 test-bbdev: explicitly disabled via build config 00:22:47.058 test-cmdline: explicitly disabled via build config 00:22:47.058 test-compress-perf: explicitly disabled via build config 00:22:47.058 test-crypto-perf: explicitly disabled via build 
config 00:22:47.058 test-dma-perf: explicitly disabled via build config 00:22:47.058 test-eventdev: explicitly disabled via build config 00:22:47.058 test-fib: explicitly disabled via build config 00:22:47.058 test-flow-perf: explicitly disabled via build config 00:22:47.058 test-gpudev: explicitly disabled via build config 00:22:47.058 test-mldev: explicitly disabled via build config 00:22:47.058 test-pipeline: explicitly disabled via build config 00:22:47.058 test-pmd: explicitly disabled via build config 00:22:47.058 test-regex: explicitly disabled via build config 00:22:47.058 test-sad: explicitly disabled via build config 00:22:47.058 test-security-perf: explicitly disabled via build config 00:22:47.058 00:22:47.058 libs: 00:22:47.058 argparse: explicitly disabled via build config 00:22:47.058 metrics: explicitly disabled via build config 00:22:47.058 acl: explicitly disabled via build config 00:22:47.058 bbdev: explicitly disabled via build config 00:22:47.058 bitratestats: explicitly disabled via build config 00:22:47.058 bpf: explicitly disabled via build config 00:22:47.058 cfgfile: explicitly disabled via build config 00:22:47.058 distributor: explicitly disabled via build config 00:22:47.058 efd: explicitly disabled via build config 00:22:47.058 eventdev: explicitly disabled via build config 00:22:47.058 dispatcher: explicitly disabled via build config 00:22:47.058 gpudev: explicitly disabled via build config 00:22:47.058 gro: explicitly disabled via build config 00:22:47.058 gso: explicitly disabled via build config 00:22:47.058 ip_frag: explicitly disabled via build config 00:22:47.058 jobstats: explicitly disabled via build config 00:22:47.058 latencystats: explicitly disabled via build config 00:22:47.058 lpm: explicitly disabled via build config 00:22:47.058 member: explicitly disabled via build config 00:22:47.058 pcapng: explicitly disabled via build config 00:22:47.058 rawdev: explicitly disabled via build config 00:22:47.058 regexdev: explicitly 
disabled via build config 00:22:47.058 mldev: explicitly disabled via build config 00:22:47.058 rib: explicitly disabled via build config 00:22:47.058 sched: explicitly disabled via build config 00:22:47.058 stack: explicitly disabled via build config 00:22:47.058 ipsec: explicitly disabled via build config 00:22:47.058 pdcp: explicitly disabled via build config 00:22:47.058 fib: explicitly disabled via build config 00:22:47.058 port: explicitly disabled via build config 00:22:47.058 pdump: explicitly disabled via build config 00:22:47.058 table: explicitly disabled via build config 00:22:47.058 pipeline: explicitly disabled via build config 00:22:47.058 graph: explicitly disabled via build config 00:22:47.058 node: explicitly disabled via build config 00:22:47.058 00:22:47.058 drivers: 00:22:47.058 common/cpt: not in enabled drivers build config 00:22:47.058 common/dpaax: not in enabled drivers build config 00:22:47.058 common/iavf: not in enabled drivers build config 00:22:47.058 common/idpf: not in enabled drivers build config 00:22:47.058 common/ionic: not in enabled drivers build config 00:22:47.058 common/mvep: not in enabled drivers build config 00:22:47.058 common/octeontx: not in enabled drivers build config 00:22:47.058 bus/auxiliary: not in enabled drivers build config 00:22:47.058 bus/cdx: not in enabled drivers build config 00:22:47.058 bus/dpaa: not in enabled drivers build config 00:22:47.058 bus/fslmc: not in enabled drivers build config 00:22:47.058 bus/ifpga: not in enabled drivers build config 00:22:47.058 bus/platform: not in enabled drivers build config 00:22:47.058 bus/uacce: not in enabled drivers build config 00:22:47.058 bus/vmbus: not in enabled drivers build config 00:22:47.058 common/cnxk: not in enabled drivers build config 00:22:47.058 common/mlx5: not in enabled drivers build config 00:22:47.058 common/nfp: not in enabled drivers build config 00:22:47.058 common/nitrox: not in enabled drivers build config 00:22:47.058 common/qat: not 
in enabled drivers build config 00:22:47.058 common/sfc_efx: not in enabled drivers build config 00:22:47.058 mempool/bucket: not in enabled drivers build config 00:22:47.058 mempool/cnxk: not in enabled drivers build config 00:22:47.058 mempool/dpaa: not in enabled drivers build config 00:22:47.058 mempool/dpaa2: not in enabled drivers build config 00:22:47.058 mempool/octeontx: not in enabled drivers build config 00:22:47.058 mempool/stack: not in enabled drivers build config 00:22:47.058 dma/cnxk: not in enabled drivers build config 00:22:47.058 dma/dpaa: not in enabled drivers build config 00:22:47.058 dma/dpaa2: not in enabled drivers build config 00:22:47.058 dma/hisilicon: not in enabled drivers build config 00:22:47.058 dma/idxd: not in enabled drivers build config 00:22:47.058 dma/ioat: not in enabled drivers build config 00:22:47.058 dma/skeleton: not in enabled drivers build config 00:22:47.058 net/af_packet: not in enabled drivers build config 00:22:47.058 net/af_xdp: not in enabled drivers build config 00:22:47.058 net/ark: not in enabled drivers build config 00:22:47.058 net/atlantic: not in enabled drivers build config 00:22:47.058 net/avp: not in enabled drivers build config 00:22:47.058 net/axgbe: not in enabled drivers build config 00:22:47.058 net/bnx2x: not in enabled drivers build config 00:22:47.058 net/bnxt: not in enabled drivers build config 00:22:47.058 net/bonding: not in enabled drivers build config 00:22:47.058 net/cnxk: not in enabled drivers build config 00:22:47.058 net/cpfl: not in enabled drivers build config 00:22:47.058 net/cxgbe: not in enabled drivers build config 00:22:47.058 net/dpaa: not in enabled drivers build config 00:22:47.058 net/dpaa2: not in enabled drivers build config 00:22:47.058 net/e1000: not in enabled drivers build config 00:22:47.058 net/ena: not in enabled drivers build config 00:22:47.058 net/enetc: not in enabled drivers build config 00:22:47.058 net/enetfec: not in enabled drivers build config 
00:22:47.058 net/enic: not in enabled drivers build config 00:22:47.058 net/failsafe: not in enabled drivers build config 00:22:47.058 net/fm10k: not in enabled drivers build config 00:22:47.058 net/gve: not in enabled drivers build config 00:22:47.058 net/hinic: not in enabled drivers build config 00:22:47.058 net/hns3: not in enabled drivers build config 00:22:47.058 net/i40e: not in enabled drivers build config 00:22:47.058 net/iavf: not in enabled drivers build config 00:22:47.058 net/ice: not in enabled drivers build config 00:22:47.058 net/idpf: not in enabled drivers build config 00:22:47.058 net/igc: not in enabled drivers build config 00:22:47.058 net/ionic: not in enabled drivers build config 00:22:47.058 net/ipn3ke: not in enabled drivers build config 00:22:47.058 net/ixgbe: not in enabled drivers build config 00:22:47.058 net/mana: not in enabled drivers build config 00:22:47.058 net/memif: not in enabled drivers build config 00:22:47.058 net/mlx4: not in enabled drivers build config 00:22:47.058 net/mlx5: not in enabled drivers build config 00:22:47.058 net/mvneta: not in enabled drivers build config 00:22:47.058 net/mvpp2: not in enabled drivers build config 00:22:47.058 net/netvsc: not in enabled drivers build config 00:22:47.058 net/nfb: not in enabled drivers build config 00:22:47.058 net/nfp: not in enabled drivers build config 00:22:47.058 net/ngbe: not in enabled drivers build config 00:22:47.058 net/null: not in enabled drivers build config 00:22:47.058 net/octeontx: not in enabled drivers build config 00:22:47.058 net/octeon_ep: not in enabled drivers build config 00:22:47.058 net/pcap: not in enabled drivers build config 00:22:47.058 net/pfe: not in enabled drivers build config 00:22:47.058 net/qede: not in enabled drivers build config 00:22:47.058 net/ring: not in enabled drivers build config 00:22:47.058 net/sfc: not in enabled drivers build config 00:22:47.058 net/softnic: not in enabled drivers build config 00:22:47.058 net/tap: not in 
enabled drivers build config 00:22:47.058 net/thunderx: not in enabled drivers build config 00:22:47.058 net/txgbe: not in enabled drivers build config 00:22:47.058 net/vdev_netvsc: not in enabled drivers build config 00:22:47.058 net/vhost: not in enabled drivers build config 00:22:47.058 net/virtio: not in enabled drivers build config 00:22:47.058 net/vmxnet3: not in enabled drivers build config 00:22:47.058 raw/*: missing internal dependency, "rawdev" 00:22:47.058 crypto/armv8: not in enabled drivers build config 00:22:47.058 crypto/bcmfs: not in enabled drivers build config 00:22:47.058 crypto/caam_jr: not in enabled drivers build config 00:22:47.058 crypto/ccp: not in enabled drivers build config 00:22:47.058 crypto/cnxk: not in enabled drivers build config 00:22:47.058 crypto/dpaa_sec: not in enabled drivers build config 00:22:47.058 crypto/dpaa2_sec: not in enabled drivers build config 00:22:47.058 crypto/ipsec_mb: not in enabled drivers build config 00:22:47.058 crypto/mlx5: not in enabled drivers build config 00:22:47.058 crypto/mvsam: not in enabled drivers build config 00:22:47.058 crypto/nitrox: not in enabled drivers build config 00:22:47.058 crypto/null: not in enabled drivers build config 00:22:47.058 crypto/octeontx: not in enabled drivers build config 00:22:47.058 crypto/openssl: not in enabled drivers build config 00:22:47.058 crypto/scheduler: not in enabled drivers build config 00:22:47.058 crypto/uadk: not in enabled drivers build config 00:22:47.058 crypto/virtio: not in enabled drivers build config 00:22:47.058 compress/isal: not in enabled drivers build config 00:22:47.058 compress/mlx5: not in enabled drivers build config 00:22:47.058 compress/nitrox: not in enabled drivers build config 00:22:47.058 compress/octeontx: not in enabled drivers build config 00:22:47.058 compress/zlib: not in enabled drivers build config 00:22:47.058 regex/*: missing internal dependency, "regexdev" 00:22:47.058 ml/*: missing internal dependency, "mldev" 
00:22:47.058 vdpa/ifc: not in enabled drivers build config 00:22:47.058 vdpa/mlx5: not in enabled drivers build config 00:22:47.058 vdpa/nfp: not in enabled drivers build config 00:22:47.058 vdpa/sfc: not in enabled drivers build config 00:22:47.058 event/*: missing internal dependency, "eventdev" 00:22:47.059 baseband/*: missing internal dependency, "bbdev" 00:22:47.059 gpu/*: missing internal dependency, "gpudev" 00:22:47.059 00:22:47.059 00:22:47.623 Build targets in project: 85 00:22:47.623 00:22:47.623 DPDK 24.03.0 00:22:47.623 00:22:47.623 User defined options 00:22:47.623 buildtype : debug 00:22:47.623 default_library : shared 00:22:47.623 libdir : lib 00:22:47.623 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:22:47.623 b_sanitize : address 00:22:47.623 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:22:47.623 c_link_args : 00:22:47.623 cpu_instruction_set: native 00:22:47.623 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:22:47.623 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:22:47.623 enable_docs : false 00:22:47.623 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:22:47.623 enable_kmods : false 00:22:47.623 max_lcores : 128 00:22:47.623 tests : false 00:22:47.623 00:22:47.623 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:22:48.187 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:22:48.445 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:22:48.445 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:22:48.445 [3/268] Linking static target lib/librte_kvargs.a 00:22:48.445 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:22:48.445 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:22:48.445 [6/268] Linking static target lib/librte_log.a 00:22:49.008 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:22:49.008 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:22:49.008 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:22:49.008 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:22:49.008 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:22:49.008 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:22:49.008 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:22:49.008 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:22:49.008 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:22:49.009 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:22:49.265 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:22:49.265 [18/268] Linking static target lib/librte_telemetry.a 00:22:49.522 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:22:49.779 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:22:49.779 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:22:49.779 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:22:49.779 [23/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:22:49.779 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:22:49.779 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:22:49.779 [26/268] Linking target lib/librte_log.so.24.1 00:22:50.051 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:22:50.051 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:22:50.051 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:22:50.051 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:22:50.308 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:22:50.308 [32/268] Linking target lib/librte_kvargs.so.24.1 00:22:50.308 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:22:50.566 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:22:50.566 [35/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:22:50.566 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:22:50.566 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:22:50.566 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:22:50.566 [39/268] Linking target lib/librte_telemetry.so.24.1 00:22:50.566 [40/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:22:50.566 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:22:50.824 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:22:50.824 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:22:50.824 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:22:50.824 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:22:50.824 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:22:51.082 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:22:51.082 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:22:51.083 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:22:51.083 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:22:51.083 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:22:51.341 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:22:51.341 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:22:51.341 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:22:51.341 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:22:51.341 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:22:51.598 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:22:51.598 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:22:51.598 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:22:51.598 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:22:51.864 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:22:51.864 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:22:51.864 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:22:51.864 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:22:51.864 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:22:52.126 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:22:52.126 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 
00:22:52.126 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:22:52.383 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:22:52.383 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:22:52.383 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:22:52.383 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:22:52.383 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:22:52.383 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:22:52.383 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:22:52.383 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:22:52.641 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:22:52.641 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:22:52.641 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:22:52.641 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:22:52.898 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:22:53.155 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:22:53.156 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:22:53.156 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:22:53.156 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:22:53.156 [86/268] Linking static target lib/librte_ring.a 00:22:53.156 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:22:53.414 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:22:53.414 [89/268] Linking static target lib/librte_rcu.a 00:22:53.414 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:22:53.414 [91/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:22:53.414 [92/268] Linking static target lib/librte_mempool.a 00:22:53.414 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:22:53.414 [94/268] Linking static target lib/librte_eal.a 00:22:53.671 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:22:53.671 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:22:53.671 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:22:53.928 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:22:53.928 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:22:54.187 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:22:54.187 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:22:54.187 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:22:54.187 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:22:54.187 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:22:54.446 [105/268] Linking static target lib/librte_mbuf.a 00:22:54.446 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:22:54.446 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:22:54.446 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:22:54.446 [109/268] Linking static target lib/librte_meter.a 00:22:54.446 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:22:54.446 [111/268] Linking static target lib/librte_net.a 00:22:54.705 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:22:54.964 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:22:54.964 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:22:54.964 [115/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:22:54.964 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:22:54.964 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:22:55.223 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:22:55.481 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:22:55.481 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:22:55.481 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:22:56.048 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:22:56.048 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:22:56.048 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:22:56.048 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:22:56.048 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:22:56.048 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:22:56.048 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:22:56.048 [129/268] Linking static target lib/librte_pci.a 00:22:56.048 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:22:56.048 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:22:56.307 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:22:56.307 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:22:56.307 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:22:56.307 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:22:56.307 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:22:56.307 [137/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:22:56.564 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:22:56.564 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:22:56.564 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:22:56.564 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:22:56.565 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:22:56.565 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:22:56.565 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:22:56.565 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:22:56.565 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:22:56.822 [147/268] Linking static target lib/librte_cmdline.a 00:22:57.080 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:22:57.080 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:22:57.338 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:22:57.338 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:22:57.338 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:22:57.595 [153/268] Linking static target lib/librte_timer.a 00:22:57.595 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:22:57.595 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:22:57.595 [156/268] Linking static target lib/librte_compressdev.a 00:22:57.855 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:22:57.855 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:22:58.113 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 
00:22:58.113 [160/268] Linking static target lib/librte_hash.a 00:22:58.113 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:22:58.113 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:22:58.370 [163/268] Linking static target lib/librte_ethdev.a 00:22:58.370 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:22:58.370 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:22:58.370 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:22:58.627 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:22:58.627 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:22:58.627 [169/268] Linking static target lib/librte_dmadev.a 00:22:58.627 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:22:58.886 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:22:59.144 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:59.144 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:22:59.401 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:22:59.401 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:22:59.402 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:22:59.659 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:22:59.659 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:22:59.659 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:59.659 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:22:59.916 [181/268] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:22:59.916 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:23:00.174 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:23:00.174 [184/268] Linking static target lib/librte_cryptodev.a 00:23:00.174 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:23:00.174 [186/268] Linking static target lib/librte_power.a 00:23:00.432 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:23:00.689 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:23:00.689 [189/268] Linking static target lib/librte_reorder.a 00:23:00.689 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:23:00.947 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:23:00.947 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:23:00.947 [193/268] Linking static target lib/librte_security.a 00:23:01.513 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:23:01.859 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:23:01.859 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:23:02.120 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:23:02.120 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:23:02.120 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:23:02.120 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:23:02.377 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:23:02.377 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:23:02.635 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:23:02.635 [204/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:23:02.635 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:23:02.891 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:23:03.148 [207/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:23:03.148 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:23:03.148 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:23:03.148 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:23:03.404 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:23:03.404 [212/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:03.404 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:03.404 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:03.404 [215/268] Linking static target drivers/librte_bus_vdev.a 00:23:03.404 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:23:03.404 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:23:03.661 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:23:03.661 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:03.661 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:03.661 [221/268] Linking static target drivers/librte_bus_pci.a 00:23:03.661 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:23:03.661 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:03.917 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:23:03.917 [225/268] Linking static target drivers/librte_mempool_ring.a 00:23:03.917 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:04.173 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:04.822 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:23:06.733 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:23:06.990 [230/268] Linking target lib/librte_eal.so.24.1 00:23:07.248 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:23:07.249 [232/268] Linking target lib/librte_ring.so.24.1 00:23:07.249 [233/268] Linking target lib/librte_meter.so.24.1 00:23:07.249 [234/268] Linking target lib/librte_timer.so.24.1 00:23:07.249 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:23:07.249 [236/268] Linking target lib/librte_dmadev.so.24.1 00:23:07.249 [237/268] Linking target lib/librte_pci.so.24.1 00:23:07.249 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:23:07.507 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:23:07.507 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:23:07.507 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:23:07.507 [242/268] Linking target lib/librte_mempool.so.24.1 00:23:07.507 [243/268] Linking target lib/librte_rcu.so.24.1 00:23:07.507 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:23:07.507 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:23:07.507 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:23:07.766 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:23:07.766 [248/268] Linking target 
lib/librte_mbuf.so.24.1 00:23:07.766 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:23:07.766 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:08.024 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:23:08.024 [252/268] Linking target lib/librte_compressdev.so.24.1 00:23:08.024 [253/268] Linking target lib/librte_reorder.so.24.1 00:23:08.024 [254/268] Linking target lib/librte_net.so.24.1 00:23:08.024 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:23:08.282 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:23:08.282 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:23:08.282 [258/268] Linking target lib/librte_cmdline.so.24.1 00:23:08.282 [259/268] Linking target lib/librte_hash.so.24.1 00:23:08.282 [260/268] Linking target lib/librte_security.so.24.1 00:23:08.282 [261/268] Linking target lib/librte_ethdev.so.24.1 00:23:08.540 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:23:08.540 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:23:08.540 [264/268] Linking target lib/librte_power.so.24.1 00:23:10.447 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:23:10.447 [266/268] Linking static target lib/librte_vhost.a 00:23:11.841 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:23:11.841 [268/268] Linking target lib/librte_vhost.so.24.1 00:23:11.841 INFO: autodetecting backend as ninja 00:23:11.841 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:23:33.767 CC lib/ut_mock/mock.o 00:23:33.767 CC lib/ut/ut.o 00:23:33.767 CC lib/log/log.o 00:23:33.767 CC lib/log/log_flags.o 00:23:33.767 CC lib/log/log_deprecated.o 00:23:33.767 LIB 
libspdk_ut.a 00:23:33.767 LIB libspdk_ut_mock.a 00:23:33.767 SO libspdk_ut.so.2.0 00:23:33.767 SO libspdk_ut_mock.so.6.0 00:23:33.767 LIB libspdk_log.a 00:23:33.767 SYMLINK libspdk_ut.so 00:23:33.767 SO libspdk_log.so.7.1 00:23:33.767 SYMLINK libspdk_ut_mock.so 00:23:33.767 SYMLINK libspdk_log.so 00:23:33.767 CXX lib/trace_parser/trace.o 00:23:33.767 CC lib/util/base64.o 00:23:33.767 CC lib/ioat/ioat.o 00:23:33.767 CC lib/dma/dma.o 00:23:33.767 CC lib/util/bit_array.o 00:23:33.767 CC lib/util/cpuset.o 00:23:33.767 CC lib/util/crc16.o 00:23:33.767 CC lib/util/crc32.o 00:23:33.767 CC lib/util/crc32c.o 00:23:33.767 CC lib/vfio_user/host/vfio_user_pci.o 00:23:33.767 CC lib/vfio_user/host/vfio_user.o 00:23:33.767 CC lib/util/crc32_ieee.o 00:23:33.767 CC lib/util/crc64.o 00:23:33.767 LIB libspdk_dma.a 00:23:33.767 CC lib/util/dif.o 00:23:33.767 SO libspdk_dma.so.5.0 00:23:33.767 CC lib/util/fd.o 00:23:33.767 CC lib/util/fd_group.o 00:23:33.767 SYMLINK libspdk_dma.so 00:23:33.767 CC lib/util/file.o 00:23:33.767 CC lib/util/hexlify.o 00:23:33.767 CC lib/util/iov.o 00:23:33.767 LIB libspdk_ioat.a 00:23:33.767 SO libspdk_ioat.so.7.0 00:23:33.767 CC lib/util/math.o 00:23:33.767 CC lib/util/net.o 00:23:33.767 LIB libspdk_vfio_user.a 00:23:33.767 SYMLINK libspdk_ioat.so 00:23:33.767 CC lib/util/pipe.o 00:23:33.767 CC lib/util/strerror_tls.o 00:23:33.767 SO libspdk_vfio_user.so.5.0 00:23:33.767 CC lib/util/string.o 00:23:33.767 SYMLINK libspdk_vfio_user.so 00:23:33.767 CC lib/util/uuid.o 00:23:33.767 CC lib/util/xor.o 00:23:33.767 CC lib/util/zipf.o 00:23:33.767 CC lib/util/md5.o 00:23:34.703 LIB libspdk_util.a 00:23:34.703 LIB libspdk_trace_parser.a 00:23:34.703 SO libspdk_util.so.10.1 00:23:34.703 SO libspdk_trace_parser.so.6.0 00:23:34.703 SYMLINK libspdk_trace_parser.so 00:23:34.703 SYMLINK libspdk_util.so 00:23:34.961 CC lib/env_dpdk/memory.o 00:23:34.961 CC lib/rdma_utils/rdma_utils.o 00:23:34.961 CC lib/env_dpdk/env.o 00:23:34.961 CC lib/conf/conf.o 00:23:34.961 CC 
lib/env_dpdk/init.o 00:23:34.961 CC lib/env_dpdk/pci.o 00:23:34.961 CC lib/env_dpdk/threads.o 00:23:34.961 CC lib/json/json_parse.o 00:23:34.961 CC lib/vmd/vmd.o 00:23:34.961 CC lib/idxd/idxd.o 00:23:35.219 CC lib/env_dpdk/pci_ioat.o 00:23:35.477 CC lib/json/json_util.o 00:23:35.477 LIB libspdk_conf.a 00:23:35.477 SO libspdk_conf.so.6.0 00:23:35.477 CC lib/vmd/led.o 00:23:35.477 SYMLINK libspdk_conf.so 00:23:35.477 CC lib/env_dpdk/pci_virtio.o 00:23:35.477 LIB libspdk_rdma_utils.a 00:23:35.735 CC lib/env_dpdk/pci_vmd.o 00:23:35.735 SO libspdk_rdma_utils.so.1.0 00:23:35.735 CC lib/json/json_write.o 00:23:35.735 SYMLINK libspdk_rdma_utils.so 00:23:35.735 CC lib/env_dpdk/pci_idxd.o 00:23:35.735 CC lib/env_dpdk/pci_event.o 00:23:35.735 CC lib/env_dpdk/sigbus_handler.o 00:23:35.993 CC lib/env_dpdk/pci_dpdk.o 00:23:35.993 CC lib/idxd/idxd_user.o 00:23:35.993 CC lib/env_dpdk/pci_dpdk_2207.o 00:23:35.993 CC lib/rdma_provider/common.o 00:23:35.993 CC lib/env_dpdk/pci_dpdk_2211.o 00:23:36.251 CC lib/idxd/idxd_kernel.o 00:23:36.251 CC lib/rdma_provider/rdma_provider_verbs.o 00:23:36.251 LIB libspdk_vmd.a 00:23:36.251 LIB libspdk_json.a 00:23:36.251 SO libspdk_vmd.so.6.0 00:23:36.509 SO libspdk_json.so.6.0 00:23:36.509 SYMLINK libspdk_vmd.so 00:23:36.509 LIB libspdk_idxd.a 00:23:36.509 SYMLINK libspdk_json.so 00:23:36.509 SO libspdk_idxd.so.12.1 00:23:36.509 LIB libspdk_rdma_provider.a 00:23:36.509 SO libspdk_rdma_provider.so.7.0 00:23:36.767 SYMLINK libspdk_idxd.so 00:23:36.767 SYMLINK libspdk_rdma_provider.so 00:23:36.767 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:23:36.767 CC lib/jsonrpc/jsonrpc_server.o 00:23:36.767 CC lib/jsonrpc/jsonrpc_client.o 00:23:36.767 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:23:37.024 LIB libspdk_jsonrpc.a 00:23:37.283 SO libspdk_jsonrpc.so.6.0 00:23:37.283 SYMLINK libspdk_jsonrpc.so 00:23:37.541 CC lib/rpc/rpc.o 00:23:37.799 LIB libspdk_env_dpdk.a 00:23:37.799 SO libspdk_env_dpdk.so.15.1 00:23:38.057 LIB libspdk_rpc.a 00:23:38.057 SO libspdk_rpc.so.6.0 
00:23:38.057 SYMLINK libspdk_rpc.so 00:23:38.057 SYMLINK libspdk_env_dpdk.so 00:23:38.314 CC lib/trace/trace.o 00:23:38.314 CC lib/notify/notify.o 00:23:38.314 CC lib/trace/trace_flags.o 00:23:38.314 CC lib/keyring/keyring_rpc.o 00:23:38.314 CC lib/keyring/keyring.o 00:23:38.314 CC lib/notify/notify_rpc.o 00:23:38.314 CC lib/trace/trace_rpc.o 00:23:38.594 LIB libspdk_notify.a 00:23:38.594 SO libspdk_notify.so.6.0 00:23:38.594 LIB libspdk_keyring.a 00:23:38.594 SYMLINK libspdk_notify.so 00:23:38.594 SO libspdk_keyring.so.2.0 00:23:38.875 SYMLINK libspdk_keyring.so 00:23:38.875 LIB libspdk_trace.a 00:23:38.875 SO libspdk_trace.so.11.0 00:23:38.875 SYMLINK libspdk_trace.so 00:23:39.133 CC lib/thread/thread.o 00:23:39.133 CC lib/thread/iobuf.o 00:23:39.133 CC lib/sock/sock.o 00:23:39.133 CC lib/sock/sock_rpc.o 00:23:39.701 LIB libspdk_sock.a 00:23:39.701 SO libspdk_sock.so.10.0 00:23:39.701 SYMLINK libspdk_sock.so 00:23:40.268 CC lib/nvme/nvme_ctrlr_cmd.o 00:23:40.268 CC lib/nvme/nvme_ctrlr.o 00:23:40.268 CC lib/nvme/nvme_ns_cmd.o 00:23:40.268 CC lib/nvme/nvme_ns.o 00:23:40.268 CC lib/nvme/nvme_fabric.o 00:23:40.268 CC lib/nvme/nvme_pcie.o 00:23:40.268 CC lib/nvme/nvme_pcie_common.o 00:23:40.268 CC lib/nvme/nvme_qpair.o 00:23:40.268 CC lib/nvme/nvme.o 00:23:40.834 CC lib/nvme/nvme_quirks.o 00:23:41.092 CC lib/nvme/nvme_transport.o 00:23:41.351 CC lib/nvme/nvme_discovery.o 00:23:41.351 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:23:41.351 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:23:41.351 LIB libspdk_thread.a 00:23:41.681 SO libspdk_thread.so.11.0 00:23:41.681 CC lib/nvme/nvme_tcp.o 00:23:41.681 CC lib/nvme/nvme_opal.o 00:23:41.681 SYMLINK libspdk_thread.so 00:23:41.681 CC lib/nvme/nvme_io_msg.o 00:23:41.681 CC lib/nvme/nvme_poll_group.o 00:23:41.940 CC lib/nvme/nvme_zns.o 00:23:42.199 CC lib/accel/accel.o 00:23:42.199 CC lib/nvme/nvme_stubs.o 00:23:42.199 CC lib/blob/blobstore.o 00:23:42.199 CC lib/init/json_config.o 00:23:42.457 CC lib/accel/accel_rpc.o 00:23:42.457 CC 
lib/init/subsystem.o 00:23:42.714 CC lib/virtio/virtio.o 00:23:42.714 CC lib/blob/request.o 00:23:42.714 CC lib/blob/zeroes.o 00:23:42.714 CC lib/accel/accel_sw.o 00:23:42.973 CC lib/init/subsystem_rpc.o 00:23:42.973 CC lib/fsdev/fsdev.o 00:23:42.973 CC lib/init/rpc.o 00:23:42.973 CC lib/virtio/virtio_vhost_user.o 00:23:42.973 CC lib/virtio/virtio_vfio_user.o 00:23:43.232 LIB libspdk_init.a 00:23:43.232 SO libspdk_init.so.6.0 00:23:43.232 CC lib/virtio/virtio_pci.o 00:23:43.232 SYMLINK libspdk_init.so 00:23:43.232 CC lib/nvme/nvme_auth.o 00:23:43.490 CC lib/fsdev/fsdev_io.o 00:23:43.749 CC lib/nvme/nvme_cuse.o 00:23:43.749 LIB libspdk_virtio.a 00:23:43.749 CC lib/event/app.o 00:23:43.749 LIB libspdk_accel.a 00:23:43.749 SO libspdk_virtio.so.7.0 00:23:43.749 SO libspdk_accel.so.16.0 00:23:43.749 SYMLINK libspdk_virtio.so 00:23:43.749 CC lib/event/reactor.o 00:23:44.007 CC lib/event/log_rpc.o 00:23:44.007 SYMLINK libspdk_accel.so 00:23:44.007 CC lib/blob/blob_bs_dev.o 00:23:44.007 CC lib/fsdev/fsdev_rpc.o 00:23:44.266 CC lib/nvme/nvme_rdma.o 00:23:44.266 CC lib/event/app_rpc.o 00:23:44.266 LIB libspdk_fsdev.a 00:23:44.266 SO libspdk_fsdev.so.2.0 00:23:44.266 CC lib/bdev/bdev.o 00:23:44.534 SYMLINK libspdk_fsdev.so 00:23:44.534 CC lib/bdev/bdev_rpc.o 00:23:44.534 CC lib/event/scheduler_static.o 00:23:44.534 CC lib/bdev/bdev_zone.o 00:23:44.534 CC lib/bdev/part.o 00:23:44.791 LIB libspdk_event.a 00:23:44.791 SO libspdk_event.so.14.0 00:23:44.791 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:23:44.791 CC lib/bdev/scsi_nvme.o 00:23:44.791 SYMLINK libspdk_event.so 00:23:46.161 LIB libspdk_fuse_dispatcher.a 00:23:46.161 SO libspdk_fuse_dispatcher.so.1.0 00:23:46.161 SYMLINK libspdk_fuse_dispatcher.so 00:23:46.418 LIB libspdk_nvme.a 00:23:46.675 SO libspdk_nvme.so.15.0 00:23:46.932 SYMLINK libspdk_nvme.so 00:23:47.497 LIB libspdk_blob.a 00:23:47.497 SO libspdk_blob.so.12.0 00:23:47.754 SYMLINK libspdk_blob.so 00:23:48.012 CC lib/lvol/lvol.o 00:23:48.012 CC lib/blobfs/blobfs.o 
00:23:48.012 CC lib/blobfs/tree.o 00:23:48.270 LIB libspdk_bdev.a 00:23:48.270 SO libspdk_bdev.so.17.0 00:23:48.528 SYMLINK libspdk_bdev.so 00:23:48.786 CC lib/nbd/nbd.o 00:23:48.786 CC lib/nbd/nbd_rpc.o 00:23:48.786 CC lib/nvmf/ctrlr_discovery.o 00:23:48.786 CC lib/scsi/dev.o 00:23:48.786 CC lib/nvmf/ctrlr_bdev.o 00:23:48.786 CC lib/ftl/ftl_core.o 00:23:48.786 CC lib/nvmf/ctrlr.o 00:23:48.786 CC lib/ublk/ublk.o 00:23:49.353 CC lib/ublk/ublk_rpc.o 00:23:49.353 CC lib/scsi/lun.o 00:23:49.353 CC lib/nvmf/subsystem.o 00:23:49.610 LIB libspdk_lvol.a 00:23:49.610 SO libspdk_lvol.so.11.0 00:23:49.610 CC lib/ftl/ftl_init.o 00:23:49.610 LIB libspdk_blobfs.a 00:23:49.610 LIB libspdk_nbd.a 00:23:49.610 SO libspdk_blobfs.so.11.0 00:23:49.610 SO libspdk_nbd.so.7.0 00:23:49.610 SYMLINK libspdk_lvol.so 00:23:49.868 CC lib/ftl/ftl_layout.o 00:23:49.868 SYMLINK libspdk_blobfs.so 00:23:49.868 CC lib/scsi/port.o 00:23:49.868 SYMLINK libspdk_nbd.so 00:23:49.868 CC lib/scsi/scsi.o 00:23:49.868 CC lib/nvmf/nvmf.o 00:23:49.868 LIB libspdk_ublk.a 00:23:49.868 CC lib/nvmf/nvmf_rpc.o 00:23:49.868 SO libspdk_ublk.so.3.0 00:23:50.126 CC lib/ftl/ftl_debug.o 00:23:50.126 CC lib/ftl/ftl_io.o 00:23:50.126 SYMLINK libspdk_ublk.so 00:23:50.126 CC lib/ftl/ftl_sb.o 00:23:50.126 CC lib/scsi/scsi_bdev.o 00:23:50.384 CC lib/ftl/ftl_l2p.o 00:23:50.384 CC lib/ftl/ftl_l2p_flat.o 00:23:50.384 CC lib/ftl/ftl_nv_cache.o 00:23:50.384 CC lib/ftl/ftl_band.o 00:23:50.384 CC lib/nvmf/transport.o 00:23:50.643 CC lib/ftl/ftl_band_ops.o 00:23:50.643 CC lib/nvmf/tcp.o 00:23:50.900 CC lib/ftl/ftl_writer.o 00:23:51.158 CC lib/scsi/scsi_pr.o 00:23:51.158 CC lib/ftl/ftl_rq.o 00:23:51.416 CC lib/ftl/ftl_reloc.o 00:23:51.416 CC lib/scsi/scsi_rpc.o 00:23:51.416 CC lib/ftl/ftl_l2p_cache.o 00:23:51.675 CC lib/ftl/ftl_p2l.o 00:23:51.675 CC lib/ftl/ftl_p2l_log.o 00:23:51.675 CC lib/scsi/task.o 00:23:51.675 CC lib/ftl/mngt/ftl_mngt.o 00:23:51.934 CC lib/nvmf/stubs.o 00:23:52.193 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:23:52.193 LIB 
libspdk_scsi.a 00:23:52.193 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:23:52.193 CC lib/ftl/mngt/ftl_mngt_startup.o 00:23:52.193 SO libspdk_scsi.so.9.0 00:23:52.451 CC lib/ftl/mngt/ftl_mngt_md.o 00:23:52.451 CC lib/ftl/mngt/ftl_mngt_misc.o 00:23:52.451 SYMLINK libspdk_scsi.so 00:23:52.451 CC lib/nvmf/mdns_server.o 00:23:52.451 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:23:52.451 CC lib/nvmf/rdma.o 00:23:53.018 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:23:53.018 CC lib/iscsi/conn.o 00:23:53.018 CC lib/vhost/vhost.o 00:23:53.018 CC lib/vhost/vhost_rpc.o 00:23:53.018 CC lib/ftl/mngt/ftl_mngt_band.o 00:23:53.018 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:23:53.018 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:23:53.018 CC lib/nvmf/auth.o 00:23:53.277 CC lib/vhost/vhost_scsi.o 00:23:53.535 CC lib/vhost/vhost_blk.o 00:23:53.535 CC lib/vhost/rte_vhost_user.o 00:23:53.535 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:23:53.794 CC lib/iscsi/init_grp.o 00:23:53.794 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:23:54.052 CC lib/iscsi/iscsi.o 00:23:54.052 CC lib/ftl/utils/ftl_conf.o 00:23:54.052 CC lib/iscsi/param.o 00:23:54.322 CC lib/iscsi/portal_grp.o 00:23:54.322 CC lib/iscsi/tgt_node.o 00:23:54.322 CC lib/ftl/utils/ftl_md.o 00:23:54.322 CC lib/iscsi/iscsi_subsystem.o 00:23:54.580 CC lib/iscsi/iscsi_rpc.o 00:23:54.580 CC lib/iscsi/task.o 00:23:54.838 CC lib/ftl/utils/ftl_mempool.o 00:23:55.096 CC lib/ftl/utils/ftl_bitmap.o 00:23:55.096 CC lib/ftl/utils/ftl_property.o 00:23:55.096 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:23:55.096 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:23:55.096 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:23:55.096 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:23:55.354 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:23:55.354 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:23:55.354 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:23:55.354 CC lib/ftl/upgrade/ftl_sb_v3.o 00:23:55.354 CC lib/ftl/upgrade/ftl_sb_v5.o 00:23:55.354 CC lib/ftl/nvc/ftl_nvc_dev.o 00:23:55.618 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:23:55.618 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:23:55.618 LIB libspdk_vhost.a 00:23:55.618 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:23:55.618 CC lib/ftl/base/ftl_base_dev.o 00:23:55.618 SO libspdk_vhost.so.8.0 00:23:55.618 CC lib/ftl/base/ftl_base_bdev.o 00:23:55.618 CC lib/ftl/ftl_trace.o 00:23:55.887 SYMLINK libspdk_vhost.so 00:23:56.144 LIB libspdk_ftl.a 00:23:56.402 LIB libspdk_iscsi.a 00:23:56.402 SO libspdk_ftl.so.9.0 00:23:56.402 LIB libspdk_nvmf.a 00:23:56.660 SO libspdk_iscsi.so.8.0 00:23:56.660 SO libspdk_nvmf.so.20.0 00:23:56.919 SYMLINK libspdk_iscsi.so 00:23:56.919 SYMLINK libspdk_ftl.so 00:23:56.919 SYMLINK libspdk_nvmf.so 00:23:57.484 CC module/env_dpdk/env_dpdk_rpc.o 00:23:57.484 CC module/fsdev/aio/fsdev_aio.o 00:23:57.484 CC module/accel/error/accel_error.o 00:23:57.484 CC module/keyring/file/keyring.o 00:23:57.484 CC module/blob/bdev/blob_bdev.o 00:23:57.484 CC module/accel/ioat/accel_ioat.o 00:23:57.484 CC module/sock/posix/posix.o 00:23:57.484 CC module/accel/iaa/accel_iaa.o 00:23:57.484 CC module/scheduler/dynamic/scheduler_dynamic.o 00:23:57.484 CC module/accel/dsa/accel_dsa.o 00:23:57.742 LIB libspdk_env_dpdk_rpc.a 00:23:57.742 SO libspdk_env_dpdk_rpc.so.6.0 00:23:57.742 SYMLINK libspdk_env_dpdk_rpc.so 00:23:57.742 CC module/accel/iaa/accel_iaa_rpc.o 00:23:57.742 CC module/accel/ioat/accel_ioat_rpc.o 00:23:57.742 CC module/accel/error/accel_error_rpc.o 00:23:57.742 CC module/keyring/file/keyring_rpc.o 00:23:58.000 CC module/accel/dsa/accel_dsa_rpc.o 00:23:58.000 LIB libspdk_blob_bdev.a 00:23:58.000 LIB libspdk_accel_iaa.a 00:23:58.000 LIB libspdk_accel_ioat.a 00:23:58.000 SO libspdk_blob_bdev.so.12.0 00:23:58.000 SO libspdk_accel_iaa.so.3.0 00:23:58.000 SO libspdk_accel_ioat.so.6.0 00:23:58.000 LIB libspdk_keyring_file.a 00:23:58.000 LIB libspdk_scheduler_dynamic.a 00:23:58.000 SYMLINK libspdk_blob_bdev.so 00:23:58.000 SO libspdk_keyring_file.so.2.0 00:23:58.000 SO libspdk_scheduler_dynamic.so.4.0 00:23:58.000 SYMLINK libspdk_accel_iaa.so 00:23:58.000 
SYMLINK libspdk_accel_ioat.so 00:23:58.000 CC module/fsdev/aio/fsdev_aio_rpc.o 00:23:58.000 LIB libspdk_accel_error.a 00:23:58.258 SYMLINK libspdk_keyring_file.so 00:23:58.258 SO libspdk_accel_error.so.2.0 00:23:58.258 SYMLINK libspdk_scheduler_dynamic.so 00:23:58.258 LIB libspdk_accel_dsa.a 00:23:58.258 SYMLINK libspdk_accel_error.so 00:23:58.258 SO libspdk_accel_dsa.so.5.0 00:23:58.258 CC module/fsdev/aio/linux_aio_mgr.o 00:23:58.258 CC module/scheduler/gscheduler/gscheduler.o 00:23:58.258 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:23:58.258 CC module/keyring/linux/keyring.o 00:23:58.516 SYMLINK libspdk_accel_dsa.so 00:23:58.516 CC module/keyring/linux/keyring_rpc.o 00:23:58.516 CC module/bdev/gpt/gpt.o 00:23:58.516 CC module/bdev/error/vbdev_error.o 00:23:58.516 CC module/bdev/delay/vbdev_delay.o 00:23:58.516 LIB libspdk_scheduler_gscheduler.a 00:23:58.774 CC module/bdev/delay/vbdev_delay_rpc.o 00:23:58.774 CC module/bdev/gpt/vbdev_gpt.o 00:23:58.774 LIB libspdk_scheduler_dpdk_governor.a 00:23:58.774 SO libspdk_scheduler_gscheduler.so.4.0 00:23:58.774 LIB libspdk_keyring_linux.a 00:23:58.774 SO libspdk_scheduler_dpdk_governor.so.4.0 00:23:58.774 SO libspdk_keyring_linux.so.1.0 00:23:58.774 SYMLINK libspdk_scheduler_gscheduler.so 00:23:58.774 SYMLINK libspdk_scheduler_dpdk_governor.so 00:23:58.774 SYMLINK libspdk_keyring_linux.so 00:23:58.774 CC module/bdev/error/vbdev_error_rpc.o 00:23:59.031 CC module/bdev/malloc/bdev_malloc.o 00:23:59.031 CC module/bdev/lvol/vbdev_lvol.o 00:23:59.031 CC module/bdev/null/bdev_null.o 00:23:59.031 LIB libspdk_bdev_gpt.a 00:23:59.031 LIB libspdk_bdev_error.a 00:23:59.289 SO libspdk_bdev_gpt.so.6.0 00:23:59.289 LIB libspdk_fsdev_aio.a 00:23:59.289 SO libspdk_bdev_error.so.6.0 00:23:59.289 CC module/bdev/nvme/bdev_nvme.o 00:23:59.289 LIB libspdk_bdev_delay.a 00:23:59.289 SYMLINK libspdk_bdev_gpt.so 00:23:59.289 SO libspdk_fsdev_aio.so.1.0 00:23:59.289 CC module/blobfs/bdev/blobfs_bdev.o 00:23:59.289 LIB 
libspdk_sock_posix.a 00:23:59.289 SYMLINK libspdk_bdev_error.so 00:23:59.289 SO libspdk_bdev_delay.so.6.0 00:23:59.289 SO libspdk_sock_posix.so.6.0 00:23:59.289 CC module/bdev/malloc/bdev_malloc_rpc.o 00:23:59.549 SYMLINK libspdk_fsdev_aio.so 00:23:59.549 SYMLINK libspdk_bdev_delay.so 00:23:59.549 CC module/bdev/passthru/vbdev_passthru.o 00:23:59.549 CC module/bdev/null/bdev_null_rpc.o 00:23:59.549 SYMLINK libspdk_sock_posix.so 00:23:59.549 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:23:59.808 CC module/bdev/raid/bdev_raid.o 00:23:59.808 CC module/bdev/split/vbdev_split.o 00:23:59.808 CC module/bdev/zone_block/vbdev_zone_block.o 00:23:59.808 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:23:59.808 LIB libspdk_bdev_malloc.a 00:23:59.808 LIB libspdk_bdev_null.a 00:23:59.808 CC module/bdev/aio/bdev_aio.o 00:23:59.808 SO libspdk_bdev_malloc.so.6.0 00:23:59.808 SO libspdk_bdev_null.so.6.0 00:24:00.066 LIB libspdk_blobfs_bdev.a 00:24:00.066 SYMLINK libspdk_bdev_malloc.so 00:24:00.066 CC module/bdev/split/vbdev_split_rpc.o 00:24:00.066 SO libspdk_blobfs_bdev.so.6.0 00:24:00.066 SYMLINK libspdk_bdev_null.so 00:24:00.066 CC module/bdev/raid/bdev_raid_rpc.o 00:24:00.066 SYMLINK libspdk_blobfs_bdev.so 00:24:00.066 CC module/bdev/raid/bdev_raid_sb.o 00:24:00.066 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:24:00.066 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:24:00.324 LIB libspdk_bdev_split.a 00:24:00.324 SO libspdk_bdev_split.so.6.0 00:24:00.324 CC module/bdev/aio/bdev_aio_rpc.o 00:24:00.324 SYMLINK libspdk_bdev_split.so 00:24:00.324 LIB libspdk_bdev_passthru.a 00:24:00.324 SO libspdk_bdev_passthru.so.6.0 00:24:00.583 LIB libspdk_bdev_zone_block.a 00:24:00.583 LIB libspdk_bdev_lvol.a 00:24:00.583 SYMLINK libspdk_bdev_passthru.so 00:24:00.583 LIB libspdk_bdev_aio.a 00:24:00.583 CC module/bdev/ftl/bdev_ftl.o 00:24:00.583 CC module/bdev/raid/raid0.o 00:24:00.583 SO libspdk_bdev_lvol.so.6.0 00:24:00.583 SO libspdk_bdev_zone_block.so.6.0 00:24:00.583 SO libspdk_bdev_aio.so.6.0 
00:24:00.583 CC module/bdev/iscsi/bdev_iscsi.o 00:24:00.583 CC module/bdev/ftl/bdev_ftl_rpc.o 00:24:00.583 SYMLINK libspdk_bdev_aio.so 00:24:00.842 CC module/bdev/nvme/bdev_nvme_rpc.o 00:24:00.842 SYMLINK libspdk_bdev_zone_block.so 00:24:00.842 SYMLINK libspdk_bdev_lvol.so 00:24:00.842 CC module/bdev/raid/raid1.o 00:24:00.842 CC module/bdev/nvme/nvme_rpc.o 00:24:00.842 CC module/bdev/virtio/bdev_virtio_scsi.o 00:24:00.842 CC module/bdev/nvme/bdev_mdns_client.o 00:24:01.101 LIB libspdk_bdev_ftl.a 00:24:01.101 CC module/bdev/nvme/vbdev_opal.o 00:24:01.101 SO libspdk_bdev_ftl.so.6.0 00:24:01.101 CC module/bdev/nvme/vbdev_opal_rpc.o 00:24:01.101 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:24:01.101 CC module/bdev/virtio/bdev_virtio_blk.o 00:24:01.361 SYMLINK libspdk_bdev_ftl.so 00:24:01.361 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:24:01.361 CC module/bdev/virtio/bdev_virtio_rpc.o 00:24:01.361 CC module/bdev/raid/concat.o 00:24:01.361 CC module/bdev/raid/raid5f.o 00:24:01.620 LIB libspdk_bdev_iscsi.a 00:24:01.620 SO libspdk_bdev_iscsi.so.6.0 00:24:01.620 SYMLINK libspdk_bdev_iscsi.so 00:24:01.878 LIB libspdk_bdev_virtio.a 00:24:01.878 SO libspdk_bdev_virtio.so.6.0 00:24:02.136 SYMLINK libspdk_bdev_virtio.so 00:24:02.136 LIB libspdk_bdev_raid.a 00:24:02.395 SO libspdk_bdev_raid.so.6.0 00:24:02.395 SYMLINK libspdk_bdev_raid.so 00:24:03.768 LIB libspdk_bdev_nvme.a 00:24:03.768 SO libspdk_bdev_nvme.so.7.1 00:24:04.026 SYMLINK libspdk_bdev_nvme.so 00:24:04.592 CC module/event/subsystems/iobuf/iobuf.o 00:24:04.592 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:24:04.592 CC module/event/subsystems/scheduler/scheduler.o 00:24:04.592 CC module/event/subsystems/vmd/vmd.o 00:24:04.592 CC module/event/subsystems/vmd/vmd_rpc.o 00:24:04.592 CC module/event/subsystems/sock/sock.o 00:24:04.592 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:24:04.592 CC module/event/subsystems/fsdev/fsdev.o 00:24:04.592 CC module/event/subsystems/keyring/keyring.o 00:24:04.592 LIB 
libspdk_event_fsdev.a 00:24:04.592 LIB libspdk_event_keyring.a 00:24:04.592 LIB libspdk_event_vhost_blk.a 00:24:04.592 LIB libspdk_event_scheduler.a 00:24:04.592 SO libspdk_event_fsdev.so.1.0 00:24:04.592 SO libspdk_event_keyring.so.1.0 00:24:04.592 LIB libspdk_event_sock.a 00:24:04.886 SO libspdk_event_vhost_blk.so.3.0 00:24:04.886 LIB libspdk_event_iobuf.a 00:24:04.886 SO libspdk_event_scheduler.so.4.0 00:24:04.886 LIB libspdk_event_vmd.a 00:24:04.886 SO libspdk_event_sock.so.5.0 00:24:04.886 SO libspdk_event_iobuf.so.3.0 00:24:04.886 SYMLINK libspdk_event_keyring.so 00:24:04.886 SYMLINK libspdk_event_fsdev.so 00:24:04.886 SO libspdk_event_vmd.so.6.0 00:24:04.886 SYMLINK libspdk_event_vhost_blk.so 00:24:04.886 SYMLINK libspdk_event_scheduler.so 00:24:04.886 SYMLINK libspdk_event_sock.so 00:24:04.886 SYMLINK libspdk_event_iobuf.so 00:24:04.886 SYMLINK libspdk_event_vmd.so 00:24:05.145 CC module/event/subsystems/accel/accel.o 00:24:05.404 LIB libspdk_event_accel.a 00:24:05.404 SO libspdk_event_accel.so.6.0 00:24:05.404 SYMLINK libspdk_event_accel.so 00:24:05.662 CC module/event/subsystems/bdev/bdev.o 00:24:05.920 LIB libspdk_event_bdev.a 00:24:05.920 SO libspdk_event_bdev.so.6.0 00:24:06.178 SYMLINK libspdk_event_bdev.so 00:24:06.438 CC module/event/subsystems/ublk/ublk.o 00:24:06.438 CC module/event/subsystems/nbd/nbd.o 00:24:06.438 CC module/event/subsystems/scsi/scsi.o 00:24:06.438 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:24:06.438 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:24:06.696 LIB libspdk_event_nbd.a 00:24:06.696 LIB libspdk_event_ublk.a 00:24:06.696 LIB libspdk_event_scsi.a 00:24:06.696 SO libspdk_event_nbd.so.6.0 00:24:06.696 SO libspdk_event_ublk.so.3.0 00:24:06.696 SO libspdk_event_scsi.so.6.0 00:24:06.696 SYMLINK libspdk_event_ublk.so 00:24:06.696 SYMLINK libspdk_event_nbd.so 00:24:06.696 SYMLINK libspdk_event_scsi.so 00:24:06.696 LIB libspdk_event_nvmf.a 00:24:06.954 SO libspdk_event_nvmf.so.6.0 00:24:06.954 SYMLINK libspdk_event_nvmf.so 
00:24:06.954 CC module/event/subsystems/iscsi/iscsi.o 00:24:06.954 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:24:07.212 LIB libspdk_event_iscsi.a 00:24:07.212 SO libspdk_event_iscsi.so.6.0 00:24:07.212 LIB libspdk_event_vhost_scsi.a 00:24:07.470 SO libspdk_event_vhost_scsi.so.3.0 00:24:07.470 SYMLINK libspdk_event_iscsi.so 00:24:07.470 SYMLINK libspdk_event_vhost_scsi.so 00:24:07.470 SO libspdk.so.6.0 00:24:07.470 SYMLINK libspdk.so 00:24:07.728 CC app/trace_record/trace_record.o 00:24:07.728 CXX app/trace/trace.o 00:24:07.728 CC app/spdk_lspci/spdk_lspci.o 00:24:07.728 CC app/spdk_nvme_identify/identify.o 00:24:07.728 CC app/spdk_nvme_perf/perf.o 00:24:07.987 CC app/iscsi_tgt/iscsi_tgt.o 00:24:07.988 CC app/nvmf_tgt/nvmf_main.o 00:24:07.988 CC app/spdk_tgt/spdk_tgt.o 00:24:07.988 CC examples/util/zipf/zipf.o 00:24:07.988 CC test/thread/poller_perf/poller_perf.o 00:24:07.988 LINK spdk_lspci 00:24:08.246 LINK nvmf_tgt 00:24:08.246 LINK poller_perf 00:24:08.246 LINK zipf 00:24:08.246 LINK iscsi_tgt 00:24:08.246 LINK spdk_trace_record 00:24:08.506 LINK spdk_tgt 00:24:08.506 LINK spdk_trace 00:24:08.763 CC app/spdk_nvme_discover/discovery_aer.o 00:24:08.763 CC examples/ioat/perf/perf.o 00:24:08.763 TEST_HEADER include/spdk/accel.h 00:24:08.763 TEST_HEADER include/spdk/accel_module.h 00:24:08.763 TEST_HEADER include/spdk/assert.h 00:24:08.763 TEST_HEADER include/spdk/barrier.h 00:24:08.763 TEST_HEADER include/spdk/base64.h 00:24:08.763 TEST_HEADER include/spdk/bdev.h 00:24:08.763 TEST_HEADER include/spdk/bdev_module.h 00:24:08.763 TEST_HEADER include/spdk/bdev_zone.h 00:24:08.763 TEST_HEADER include/spdk/bit_array.h 00:24:08.763 TEST_HEADER include/spdk/bit_pool.h 00:24:08.763 TEST_HEADER include/spdk/blob_bdev.h 00:24:09.109 TEST_HEADER include/spdk/blobfs_bdev.h 00:24:09.109 TEST_HEADER include/spdk/blobfs.h 00:24:09.109 TEST_HEADER include/spdk/blob.h 00:24:09.109 TEST_HEADER include/spdk/conf.h 00:24:09.109 TEST_HEADER include/spdk/config.h 00:24:09.109 
TEST_HEADER include/spdk/cpuset.h 00:24:09.109 TEST_HEADER include/spdk/crc16.h 00:24:09.109 TEST_HEADER include/spdk/crc32.h 00:24:09.109 TEST_HEADER include/spdk/crc64.h 00:24:09.109 TEST_HEADER include/spdk/dif.h 00:24:09.109 TEST_HEADER include/spdk/dma.h 00:24:09.109 TEST_HEADER include/spdk/endian.h 00:24:09.109 TEST_HEADER include/spdk/env_dpdk.h 00:24:09.109 TEST_HEADER include/spdk/env.h 00:24:09.109 TEST_HEADER include/spdk/event.h 00:24:09.109 TEST_HEADER include/spdk/fd_group.h 00:24:09.109 TEST_HEADER include/spdk/fd.h 00:24:09.109 TEST_HEADER include/spdk/file.h 00:24:09.109 TEST_HEADER include/spdk/fsdev.h 00:24:09.109 TEST_HEADER include/spdk/fsdev_module.h 00:24:09.109 TEST_HEADER include/spdk/ftl.h 00:24:09.109 TEST_HEADER include/spdk/fuse_dispatcher.h 00:24:09.109 TEST_HEADER include/spdk/gpt_spec.h 00:24:09.109 TEST_HEADER include/spdk/hexlify.h 00:24:09.109 TEST_HEADER include/spdk/histogram_data.h 00:24:09.109 TEST_HEADER include/spdk/idxd.h 00:24:09.109 TEST_HEADER include/spdk/idxd_spec.h 00:24:09.109 TEST_HEADER include/spdk/init.h 00:24:09.109 TEST_HEADER include/spdk/ioat.h 00:24:09.109 TEST_HEADER include/spdk/ioat_spec.h 00:24:09.109 TEST_HEADER include/spdk/iscsi_spec.h 00:24:09.109 CC examples/interrupt_tgt/interrupt_tgt.o 00:24:09.109 TEST_HEADER include/spdk/json.h 00:24:09.109 TEST_HEADER include/spdk/jsonrpc.h 00:24:09.109 TEST_HEADER include/spdk/keyring.h 00:24:09.109 CC test/dma/test_dma/test_dma.o 00:24:09.109 TEST_HEADER include/spdk/keyring_module.h 00:24:09.109 TEST_HEADER include/spdk/likely.h 00:24:09.109 TEST_HEADER include/spdk/log.h 00:24:09.109 TEST_HEADER include/spdk/lvol.h 00:24:09.109 TEST_HEADER include/spdk/md5.h 00:24:09.109 TEST_HEADER include/spdk/memory.h 00:24:09.109 TEST_HEADER include/spdk/mmio.h 00:24:09.109 TEST_HEADER include/spdk/nbd.h 00:24:09.109 CC test/app/bdev_svc/bdev_svc.o 00:24:09.109 TEST_HEADER include/spdk/net.h 00:24:09.109 TEST_HEADER include/spdk/notify.h 00:24:09.109 TEST_HEADER 
include/spdk/nvme.h 00:24:09.109 TEST_HEADER include/spdk/nvme_intel.h 00:24:09.109 TEST_HEADER include/spdk/nvme_ocssd.h 00:24:09.109 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:24:09.109 TEST_HEADER include/spdk/nvme_spec.h 00:24:09.109 TEST_HEADER include/spdk/nvme_zns.h 00:24:09.109 TEST_HEADER include/spdk/nvmf_cmd.h 00:24:09.109 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:24:09.109 TEST_HEADER include/spdk/nvmf.h 00:24:09.109 TEST_HEADER include/spdk/nvmf_spec.h 00:24:09.109 TEST_HEADER include/spdk/nvmf_transport.h 00:24:09.109 TEST_HEADER include/spdk/opal.h 00:24:09.109 TEST_HEADER include/spdk/opal_spec.h 00:24:09.109 TEST_HEADER include/spdk/pci_ids.h 00:24:09.109 TEST_HEADER include/spdk/pipe.h 00:24:09.109 TEST_HEADER include/spdk/queue.h 00:24:09.109 TEST_HEADER include/spdk/reduce.h 00:24:09.109 TEST_HEADER include/spdk/rpc.h 00:24:09.109 TEST_HEADER include/spdk/scheduler.h 00:24:09.109 TEST_HEADER include/spdk/scsi.h 00:24:09.109 TEST_HEADER include/spdk/scsi_spec.h 00:24:09.109 TEST_HEADER include/spdk/sock.h 00:24:09.109 TEST_HEADER include/spdk/stdinc.h 00:24:09.109 TEST_HEADER include/spdk/string.h 00:24:09.109 TEST_HEADER include/spdk/thread.h 00:24:09.109 TEST_HEADER include/spdk/trace.h 00:24:09.109 TEST_HEADER include/spdk/trace_parser.h 00:24:09.109 TEST_HEADER include/spdk/tree.h 00:24:09.109 TEST_HEADER include/spdk/ublk.h 00:24:09.109 TEST_HEADER include/spdk/util.h 00:24:09.109 TEST_HEADER include/spdk/uuid.h 00:24:09.109 LINK spdk_nvme_discover 00:24:09.109 TEST_HEADER include/spdk/version.h 00:24:09.109 TEST_HEADER include/spdk/vfio_user_pci.h 00:24:09.109 TEST_HEADER include/spdk/vfio_user_spec.h 00:24:09.109 TEST_HEADER include/spdk/vhost.h 00:24:09.392 CC examples/thread/thread/thread_ex.o 00:24:09.392 TEST_HEADER include/spdk/vmd.h 00:24:09.392 TEST_HEADER include/spdk/xor.h 00:24:09.392 TEST_HEADER include/spdk/zipf.h 00:24:09.392 CXX test/cpp_headers/accel.o 00:24:09.392 CC examples/sock/hello_world/hello_sock.o 00:24:09.392 
LINK ioat_perf 00:24:09.392 LINK interrupt_tgt 00:24:09.392 LINK bdev_svc 00:24:09.651 CXX test/cpp_headers/accel_module.o 00:24:09.651 LINK spdk_nvme_perf 00:24:09.651 CC app/spdk_top/spdk_top.o 00:24:09.651 CC examples/ioat/verify/verify.o 00:24:09.651 LINK thread 00:24:09.908 LINK hello_sock 00:24:09.908 LINK spdk_nvme_identify 00:24:09.908 CXX test/cpp_headers/assert.o 00:24:09.908 LINK test_dma 00:24:09.908 CC app/vhost/vhost.o 00:24:10.166 LINK verify 00:24:10.166 CXX test/cpp_headers/barrier.o 00:24:10.166 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:24:10.166 CXX test/cpp_headers/base64.o 00:24:10.166 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:24:10.166 CC app/spdk_dd/spdk_dd.o 00:24:10.423 LINK vhost 00:24:10.423 CXX test/cpp_headers/bdev.o 00:24:10.423 CC test/env/vtophys/vtophys.o 00:24:10.680 CC test/env/mem_callbacks/mem_callbacks.o 00:24:10.680 CC examples/vmd/lsvmd/lsvmd.o 00:24:10.680 CC test/event/event_perf/event_perf.o 00:24:10.680 CXX test/cpp_headers/bdev_module.o 00:24:10.680 LINK vtophys 00:24:10.938 CC examples/vmd/led/led.o 00:24:10.938 LINK lsvmd 00:24:10.938 LINK event_perf 00:24:10.938 LINK spdk_dd 00:24:10.938 LINK nvme_fuzz 00:24:11.196 CXX test/cpp_headers/bdev_zone.o 00:24:11.196 LINK led 00:24:11.196 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:24:11.477 CC test/app/histogram_perf/histogram_perf.o 00:24:11.477 CC test/event/reactor/reactor.o 00:24:11.477 CC test/event/reactor_perf/reactor_perf.o 00:24:11.477 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:24:11.477 CC test/app/jsoncat/jsoncat.o 00:24:11.478 LINK mem_callbacks 00:24:11.478 CXX test/cpp_headers/bit_array.o 00:24:11.478 LINK histogram_perf 00:24:11.478 LINK reactor 00:24:11.737 LINK spdk_top 00:24:11.737 LINK reactor_perf 00:24:11.737 CC examples/idxd/perf/perf.o 00:24:11.737 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:24:11.737 LINK jsoncat 00:24:11.994 CXX test/cpp_headers/bit_pool.o 00:24:11.994 CXX test/cpp_headers/blob_bdev.o 00:24:12.253 LINK 
env_dpdk_post_init 00:24:12.253 CC test/event/app_repeat/app_repeat.o 00:24:12.253 CC test/nvme/aer/aer.o 00:24:12.253 CC examples/fsdev/hello_world/hello_fsdev.o 00:24:12.253 CC app/fio/nvme/fio_plugin.o 00:24:12.253 CXX test/cpp_headers/blobfs_bdev.o 00:24:12.253 LINK vhost_fuzz 00:24:12.512 CC test/event/scheduler/scheduler.o 00:24:12.512 LINK idxd_perf 00:24:12.512 LINK app_repeat 00:24:12.512 CXX test/cpp_headers/blobfs.o 00:24:12.771 LINK aer 00:24:12.771 CC test/env/memory/memory_ut.o 00:24:12.771 CXX test/cpp_headers/blob.o 00:24:12.771 LINK hello_fsdev 00:24:12.771 CC test/env/pci/pci_ut.o 00:24:13.031 CC test/rpc_client/rpc_client_test.o 00:24:13.031 LINK scheduler 00:24:13.031 CC app/fio/bdev/fio_plugin.o 00:24:13.031 CXX test/cpp_headers/conf.o 00:24:13.031 CC test/nvme/reset/reset.o 00:24:13.289 LINK rpc_client_test 00:24:13.289 CXX test/cpp_headers/config.o 00:24:13.547 LINK spdk_nvme 00:24:13.547 CXX test/cpp_headers/cpuset.o 00:24:13.547 CC examples/accel/perf/accel_perf.o 00:24:13.547 CXX test/cpp_headers/crc16.o 00:24:13.547 CXX test/cpp_headers/crc32.o 00:24:13.806 LINK reset 00:24:13.806 LINK pci_ut 00:24:13.806 CC examples/blob/hello_world/hello_blob.o 00:24:13.806 LINK iscsi_fuzz 00:24:13.806 LINK spdk_bdev 00:24:14.065 CXX test/cpp_headers/crc64.o 00:24:14.065 CC examples/nvme/hello_world/hello_world.o 00:24:14.065 CC test/nvme/sgl/sgl.o 00:24:14.065 CC test/nvme/e2edp/nvme_dp.o 00:24:14.065 LINK hello_blob 00:24:14.323 CC test/nvme/overhead/overhead.o 00:24:14.323 CXX test/cpp_headers/dif.o 00:24:14.323 CC examples/nvme/reconnect/reconnect.o 00:24:14.323 CC test/app/stub/stub.o 00:24:14.581 LINK hello_world 00:24:14.581 LINK sgl 00:24:14.581 LINK accel_perf 00:24:14.581 CXX test/cpp_headers/dma.o 00:24:14.581 CC examples/blob/cli/blobcli.o 00:24:14.581 LINK stub 00:24:14.839 LINK nvme_dp 00:24:14.839 CXX test/cpp_headers/endian.o 00:24:14.839 LINK overhead 00:24:14.839 CC test/nvme/err_injection/err_injection.o 00:24:15.097 LINK memory_ut 
00:24:15.097 CXX test/cpp_headers/env_dpdk.o 00:24:15.097 LINK reconnect 00:24:15.097 CC test/nvme/startup/startup.o 00:24:15.355 CC test/accel/dif/dif.o 00:24:15.355 CC test/nvme/reserve/reserve.o 00:24:15.355 LINK err_injection 00:24:15.355 CC test/blobfs/mkfs/mkfs.o 00:24:15.355 CXX test/cpp_headers/env.o 00:24:15.355 LINK blobcli 00:24:15.355 CC test/nvme/simple_copy/simple_copy.o 00:24:15.613 LINK startup 00:24:15.613 CC examples/nvme/nvme_manage/nvme_manage.o 00:24:15.613 CXX test/cpp_headers/event.o 00:24:15.613 CC test/lvol/esnap/esnap.o 00:24:15.613 LINK reserve 00:24:15.613 LINK mkfs 00:24:15.870 CXX test/cpp_headers/fd_group.o 00:24:15.870 CC test/nvme/connect_stress/connect_stress.o 00:24:15.870 CC test/nvme/boot_partition/boot_partition.o 00:24:15.870 LINK simple_copy 00:24:16.128 CXX test/cpp_headers/fd.o 00:24:16.128 CC examples/nvme/arbitration/arbitration.o 00:24:16.385 LINK connect_stress 00:24:16.385 CC examples/nvme/cmb_copy/cmb_copy.o 00:24:16.385 CC examples/nvme/hotplug/hotplug.o 00:24:16.385 LINK boot_partition 00:24:16.385 CXX test/cpp_headers/file.o 00:24:16.385 CC examples/nvme/abort/abort.o 00:24:16.385 LINK dif 00:24:16.644 CXX test/cpp_headers/fsdev.o 00:24:16.644 LINK cmb_copy 00:24:16.644 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:24:16.644 LINK nvme_manage 00:24:16.644 LINK arbitration 00:24:16.902 CC test/nvme/compliance/nvme_compliance.o 00:24:16.902 LINK hotplug 00:24:16.902 CXX test/cpp_headers/fsdev_module.o 00:24:16.902 CC test/nvme/fused_ordering/fused_ordering.o 00:24:16.902 CXX test/cpp_headers/ftl.o 00:24:17.168 CXX test/cpp_headers/fuse_dispatcher.o 00:24:17.168 LINK pmr_persistence 00:24:17.168 CXX test/cpp_headers/gpt_spec.o 00:24:17.168 LINK fused_ordering 00:24:17.427 CXX test/cpp_headers/hexlify.o 00:24:17.427 LINK abort 00:24:17.427 CC test/bdev/bdevio/bdevio.o 00:24:17.427 CXX test/cpp_headers/histogram_data.o 00:24:17.427 CC examples/bdev/hello_world/hello_bdev.o 00:24:17.427 CXX test/cpp_headers/idxd.o 
00:24:17.427 CXX test/cpp_headers/idxd_spec.o 00:24:17.427 CC examples/bdev/bdevperf/bdevperf.o 00:24:17.427 LINK nvme_compliance 00:24:17.686 CXX test/cpp_headers/init.o 00:24:17.686 CXX test/cpp_headers/ioat.o 00:24:17.686 CXX test/cpp_headers/ioat_spec.o 00:24:17.686 CXX test/cpp_headers/iscsi_spec.o 00:24:17.686 CXX test/cpp_headers/json.o 00:24:17.945 CC test/nvme/doorbell_aers/doorbell_aers.o 00:24:17.945 LINK hello_bdev 00:24:17.945 LINK bdevio 00:24:17.945 CC test/nvme/fdp/fdp.o 00:24:17.945 CXX test/cpp_headers/jsonrpc.o 00:24:17.945 CXX test/cpp_headers/keyring.o 00:24:18.203 CXX test/cpp_headers/keyring_module.o 00:24:18.203 LINK doorbell_aers 00:24:18.203 CC test/nvme/cuse/cuse.o 00:24:18.204 CXX test/cpp_headers/likely.o 00:24:18.204 CXX test/cpp_headers/log.o 00:24:18.204 CXX test/cpp_headers/lvol.o 00:24:18.462 CXX test/cpp_headers/md5.o 00:24:18.462 CXX test/cpp_headers/memory.o 00:24:18.462 CXX test/cpp_headers/mmio.o 00:24:18.462 CXX test/cpp_headers/nbd.o 00:24:18.462 CXX test/cpp_headers/net.o 00:24:18.462 CXX test/cpp_headers/notify.o 00:24:18.721 CXX test/cpp_headers/nvme.o 00:24:18.721 LINK bdevperf 00:24:18.721 CXX test/cpp_headers/nvme_intel.o 00:24:18.721 CXX test/cpp_headers/nvme_ocssd.o 00:24:18.721 CXX test/cpp_headers/nvme_ocssd_spec.o 00:24:18.721 CXX test/cpp_headers/nvme_spec.o 00:24:18.721 CXX test/cpp_headers/nvme_zns.o 00:24:18.979 LINK fdp 00:24:18.979 CXX test/cpp_headers/nvmf_cmd.o 00:24:18.979 CXX test/cpp_headers/nvmf_fc_spec.o 00:24:18.979 CXX test/cpp_headers/nvmf.o 00:24:18.979 CXX test/cpp_headers/nvmf_spec.o 00:24:18.979 CXX test/cpp_headers/nvmf_transport.o 00:24:19.238 CXX test/cpp_headers/opal.o 00:24:19.238 CXX test/cpp_headers/opal_spec.o 00:24:19.238 CXX test/cpp_headers/pci_ids.o 00:24:19.238 CXX test/cpp_headers/pipe.o 00:24:19.238 CXX test/cpp_headers/queue.o 00:24:19.238 CXX test/cpp_headers/reduce.o 00:24:19.238 CC examples/nvmf/nvmf/nvmf.o 00:24:19.238 CXX test/cpp_headers/rpc.o 00:24:19.497 CXX 
test/cpp_headers/scheduler.o 00:24:19.497 CXX test/cpp_headers/scsi.o 00:24:19.497 CXX test/cpp_headers/scsi_spec.o 00:24:19.497 CXX test/cpp_headers/sock.o 00:24:19.497 CXX test/cpp_headers/stdinc.o 00:24:19.497 CXX test/cpp_headers/string.o 00:24:19.497 CXX test/cpp_headers/thread.o 00:24:19.754 CXX test/cpp_headers/trace.o 00:24:19.754 CXX test/cpp_headers/trace_parser.o 00:24:19.754 CXX test/cpp_headers/tree.o 00:24:19.754 LINK nvmf 00:24:19.754 CXX test/cpp_headers/ublk.o 00:24:19.754 CXX test/cpp_headers/util.o 00:24:19.754 CXX test/cpp_headers/uuid.o 00:24:19.754 LINK cuse 00:24:19.754 CXX test/cpp_headers/version.o 00:24:19.754 CXX test/cpp_headers/vfio_user_pci.o 00:24:19.754 CXX test/cpp_headers/vfio_user_spec.o 00:24:19.754 CXX test/cpp_headers/vhost.o 00:24:19.754 CXX test/cpp_headers/vmd.o 00:24:20.012 CXX test/cpp_headers/xor.o 00:24:20.012 CXX test/cpp_headers/zipf.o 00:24:24.200 LINK esnap 00:24:24.769 ************************************ 00:24:24.769 END TEST make 00:24:24.769 ************************************ 00:24:24.769 00:24:24.769 real 1m52.905s 00:24:24.769 user 10m31.655s 00:24:24.769 sys 2m17.947s 00:24:24.769 17:21:01 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:24:24.769 17:21:01 make -- common/autotest_common.sh@10 -- $ set +x 00:24:24.769 17:21:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:24:24.769 17:21:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:24.769 17:21:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:24:24.769 17:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:24.769 17:21:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:24:24.769 17:21:01 -- pm/common@44 -- $ pid=5296 00:24:24.769 17:21:01 -- pm/common@50 -- $ kill -TERM 5296 00:24:24.769 17:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:24.769 17:21:01 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:24.769 17:21:01 -- pm/common@44 -- $ pid=5298 00:24:24.769 17:21:01 -- pm/common@50 -- $ kill -TERM 5298 00:24:24.769 17:21:01 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:24:24.769 17:21:01 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:24:24.769 17:21:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:24.769 17:21:02 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:24.769 17:21:02 -- common/autotest_common.sh@1693 -- # lcov --version 00:24:24.769 17:21:02 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:24.769 17:21:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.769 17:21:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.769 17:21:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.769 17:21:02 -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.769 17:21:02 -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.769 17:21:02 -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.769 17:21:02 -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.769 17:21:02 -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.769 17:21:02 -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.769 17:21:02 -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.769 17:21:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.769 17:21:02 -- scripts/common.sh@344 -- # case "$op" in 00:24:24.769 17:21:02 -- scripts/common.sh@345 -- # : 1 00:24:24.769 17:21:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.769 17:21:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.769 17:21:02 -- scripts/common.sh@365 -- # decimal 1 00:24:24.769 17:21:02 -- scripts/common.sh@353 -- # local d=1 00:24:24.769 17:21:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.769 17:21:02 -- scripts/common.sh@355 -- # echo 1 00:24:24.769 17:21:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.769 17:21:02 -- scripts/common.sh@366 -- # decimal 2 00:24:24.769 17:21:02 -- scripts/common.sh@353 -- # local d=2 00:24:24.769 17:21:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.769 17:21:02 -- scripts/common.sh@355 -- # echo 2 00:24:24.769 17:21:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.769 17:21:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.769 17:21:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.769 17:21:02 -- scripts/common.sh@368 -- # return 0 00:24:24.769 17:21:02 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.769 17:21:02 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:24.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.769 --rc genhtml_branch_coverage=1 00:24:24.769 --rc genhtml_function_coverage=1 00:24:24.769 --rc genhtml_legend=1 00:24:24.769 --rc geninfo_all_blocks=1 00:24:24.769 --rc geninfo_unexecuted_blocks=1 00:24:24.769 00:24:24.769 ' 00:24:24.769 17:21:02 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:24.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.769 --rc genhtml_branch_coverage=1 00:24:24.769 --rc genhtml_function_coverage=1 00:24:24.769 --rc genhtml_legend=1 00:24:24.769 --rc geninfo_all_blocks=1 00:24:24.769 --rc geninfo_unexecuted_blocks=1 00:24:24.769 00:24:24.769 ' 00:24:24.769 17:21:02 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:24.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.769 --rc genhtml_branch_coverage=1 00:24:24.769 --rc 
genhtml_function_coverage=1 00:24:24.769 --rc genhtml_legend=1 00:24:24.769 --rc geninfo_all_blocks=1 00:24:24.769 --rc geninfo_unexecuted_blocks=1 00:24:24.769 00:24:24.769 ' 00:24:24.769 17:21:02 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:24.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.769 --rc genhtml_branch_coverage=1 00:24:24.769 --rc genhtml_function_coverage=1 00:24:24.769 --rc genhtml_legend=1 00:24:24.769 --rc geninfo_all_blocks=1 00:24:24.769 --rc geninfo_unexecuted_blocks=1 00:24:24.769 00:24:24.769 ' 00:24:24.769 17:21:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:24.769 17:21:02 -- nvmf/common.sh@7 -- # uname -s 00:24:24.769 17:21:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.769 17:21:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.769 17:21:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.769 17:21:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.770 17:21:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.770 17:21:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.770 17:21:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.770 17:21:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.770 17:21:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.770 17:21:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.770 17:21:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d9aa832d-f5ae-44cc-9119-911c3264b49a 00:24:24.770 17:21:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=d9aa832d-f5ae-44cc-9119-911c3264b49a 00:24:24.770 17:21:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.770 17:21:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.770 17:21:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:24.770 17:21:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:24.770 17:21:02 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:24.770 17:21:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.770 17:21:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.770 17:21:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.770 17:21:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.770 17:21:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.770 17:21:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.770 17:21:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.770 17:21:02 -- paths/export.sh@5 -- # export PATH 00:24:24.770 17:21:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.770 17:21:02 -- nvmf/common.sh@51 -- # : 0 00:24:24.770 17:21:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.770 17:21:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.770 17:21:02 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:24:24.770 17:21:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.770 17:21:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.770 17:21:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.770 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.770 17:21:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.770 17:21:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.770 17:21:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.770 17:21:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:24:24.770 17:21:02 -- spdk/autotest.sh@32 -- # uname -s 00:24:24.770 17:21:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:24:24.770 17:21:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:24:24.770 17:21:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:24:24.770 17:21:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:24:24.770 17:21:02 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:24:24.770 17:21:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:24:25.029 17:21:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:24:25.029 17:21:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:24:25.029 17:21:02 -- spdk/autotest.sh@48 -- # udevadm_pid=54518 00:24:25.029 17:21:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:24:25.029 17:21:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:24:25.029 17:21:02 -- pm/common@17 -- # local monitor 00:24:25.029 17:21:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:24:25.029 17:21:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:24:25.029 17:21:02 -- pm/common@25 -- # sleep 1 00:24:25.029 17:21:02 -- pm/common@21 -- # date +%s 00:24:25.029 17:21:02 -- 
pm/common@21 -- # date +%s 00:24:25.029 17:21:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732641662 00:24:25.029 17:21:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732641662 00:24:25.029 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732641662_collect-cpu-load.pm.log 00:24:25.029 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732641662_collect-vmstat.pm.log 00:24:26.034 17:21:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:24:26.034 17:21:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:24:26.034 17:21:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.034 17:21:03 -- common/autotest_common.sh@10 -- # set +x 00:24:26.034 17:21:03 -- spdk/autotest.sh@59 -- # create_test_list 00:24:26.034 17:21:03 -- common/autotest_common.sh@752 -- # xtrace_disable 00:24:26.034 17:21:03 -- common/autotest_common.sh@10 -- # set +x 00:24:26.034 17:21:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:24:26.034 17:21:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:24:26.034 17:21:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:24:26.034 17:21:03 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:24:26.034 17:21:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:24:26.034 17:21:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:24:26.034 17:21:03 -- common/autotest_common.sh@1457 -- # uname 00:24:26.034 17:21:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:24:26.034 17:21:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:24:26.034 17:21:03 -- common/autotest_common.sh@1477 -- 
# uname 00:24:26.034 17:21:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:24:26.034 17:21:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:24:26.034 17:21:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:24:26.034 lcov: LCOV version 1.15 00:24:26.034 17:21:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:24:48.045 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:24:48.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:25:06.120 17:21:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:25:06.120 17:21:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.120 17:21:42 -- common/autotest_common.sh@10 -- # set +x 00:25:06.120 17:21:42 -- spdk/autotest.sh@78 -- # rm -f 00:25:06.120 17:21:42 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:06.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:06.120 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:06.120 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:06.120 17:21:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:25:06.120 17:21:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:25:06.120 17:21:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:25:06.120 17:21:43 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:25:06.120 
17:21:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:06.120 17:21:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:25:06.120 17:21:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:06.120 17:21:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:06.120 17:21:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:06.120 17:21:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:06.120 17:21:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:25:06.120 17:21:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:06.120 17:21:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:06.120 17:21:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:06.120 17:21:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:06.120 17:21:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:25:06.120 17:21:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:25:06.120 17:21:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:25:06.120 17:21:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:06.120 17:21:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:25:06.120 17:21:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:25:06.120 17:21:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:25:06.120 17:21:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:25:06.120 17:21:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:06.120 17:21:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:25:06.120 17:21:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:06.120 17:21:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:06.120 17:21:43 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:25:06.120 17:21:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:25:06.120 17:21:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:25:06.120 No valid GPT data, bailing 00:25:06.120 17:21:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:06.120 17:21:43 -- scripts/common.sh@394 -- # pt= 00:25:06.120 17:21:43 -- scripts/common.sh@395 -- # return 1 00:25:06.120 17:21:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:25:06.120 1+0 records in 00:25:06.120 1+0 records out 00:25:06.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551471 s, 190 MB/s 00:25:06.120 17:21:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:06.120 17:21:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:06.120 17:21:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:25:06.120 17:21:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:25:06.120 17:21:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:25:06.120 No valid GPT data, bailing 00:25:06.120 17:21:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:06.120 17:21:43 -- scripts/common.sh@394 -- # pt= 00:25:06.120 17:21:43 -- scripts/common.sh@395 -- # return 1 00:25:06.120 17:21:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:25:06.120 1+0 records in 00:25:06.120 1+0 records out 00:25:06.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496257 s, 211 MB/s 00:25:06.120 17:21:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:06.120 17:21:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:06.120 17:21:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:25:06.120 17:21:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:25:06.120 17:21:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:25:06.120 No valid GPT data, bailing 00:25:06.120 17:21:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:25:06.120 17:21:43 -- scripts/common.sh@394 -- # pt= 00:25:06.120 17:21:43 -- scripts/common.sh@395 -- # return 1 00:25:06.120 17:21:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:25:06.120 1+0 records in 00:25:06.120 1+0 records out 00:25:06.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510845 s, 205 MB/s 00:25:06.120 17:21:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:06.120 17:21:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:06.120 17:21:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:25:06.120 17:21:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:25:06.120 17:21:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:25:06.120 No valid GPT data, bailing 00:25:06.120 17:21:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:25:06.120 17:21:43 -- scripts/common.sh@394 -- # pt= 00:25:06.120 17:21:43 -- scripts/common.sh@395 -- # return 1 00:25:06.120 17:21:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:25:06.120 1+0 records in 00:25:06.120 1+0 records out 00:25:06.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489102 s, 214 MB/s 00:25:06.120 17:21:43 -- spdk/autotest.sh@105 -- # sync 00:25:06.120 17:21:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:25:06.121 17:21:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:25:06.121 17:21:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:25:08.022 17:21:45 -- spdk/autotest.sh@111 -- # uname -s 00:25:08.022 17:21:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:25:08.022 17:21:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:25:08.022 17:21:45 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:25:08.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:08.670 Hugepages 00:25:08.670 node hugesize free / total 00:25:08.670 node0 1048576kB 0 / 0 00:25:08.670 node0 2048kB 0 / 0 00:25:08.670 00:25:08.670 Type BDF Vendor Device NUMA Driver Device Block devices 00:25:08.929 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:25:08.929 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:25:08.929 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:25:08.929 17:21:46 -- spdk/autotest.sh@117 -- # uname -s 00:25:08.929 17:21:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:25:08.929 17:21:46 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:25:08.929 17:21:46 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:09.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:09.865 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:09.865 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:09.865 17:21:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:25:10.802 17:21:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:25:10.802 17:21:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:25:10.802 17:21:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:10.802 17:21:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:10.802 17:21:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:10.802 17:21:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:10.802 17:21:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:11.061 17:21:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:11.061 17:21:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:11.061 17:21:48 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:11.061 17:21:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:11.061 17:21:48 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:11.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:11.318 Waiting for block devices as requested 00:25:11.318 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:11.611 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:11.611 17:21:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:25:11.611 17:21:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:25:11.611 17:21:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:25:11.611 17:21:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:25:11.611 17:21:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:25:11.611 17:21:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:25:11.611 17:21:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:25:11.611 17:21:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:25:11.611 17:21:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:25:11.611 17:21:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:25:11.611 17:21:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:25:11.611 17:21:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:25:11.611 17:21:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:25:11.611 17:21:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:25:11.611 17:21:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:25:11.611 17:21:48 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:25:11.611 17:21:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:25:11.611 17:21:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:25:11.611 17:21:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:25:11.611 17:21:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:25:11.611 17:21:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:25:11.611 17:21:49 -- common/autotest_common.sh@1543 -- # continue 00:25:11.611 17:21:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:25:11.611 17:21:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:25:11.611 17:21:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:25:11.611 17:21:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:25:11.611 17:21:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:25:11.611 17:21:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:25:11.611 17:21:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:25:11.611 17:21:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:25:11.611 17:21:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:25:11.611 17:21:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:25:11.611 17:21:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:25:11.611 17:21:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:25:11.611 17:21:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:25:11.611 17:21:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:25:11.611 17:21:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:25:11.611 17:21:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:25:11.611 17:21:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:25:11.611 17:21:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:25:11.611 17:21:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:25:11.611 17:21:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:25:11.611 17:21:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:25:11.611 17:21:49 -- common/autotest_common.sh@1543 -- # continue 00:25:11.611 17:21:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:25:11.611 17:21:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.611 17:21:49 -- common/autotest_common.sh@10 -- # set +x 00:25:11.870 17:21:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:25:11.870 17:21:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.870 17:21:49 -- common/autotest_common.sh@10 -- # set +x 00:25:11.870 17:21:49 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:12.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:12.694 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:12.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:12.694 17:21:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:25:12.694 17:21:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:12.694 17:21:50 -- common/autotest_common.sh@10 -- # set +x 00:25:12.694 17:21:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:25:12.694 17:21:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:25:12.694 17:21:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:25:12.694 17:21:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:25:12.694 17:21:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:25:12.694 17:21:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:25:12.694 17:21:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:25:12.694 17:21:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:25:12.694 
17:21:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:12.694 17:21:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:12.694 17:21:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:12.694 17:21:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:12.694 17:21:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:12.951 17:21:50 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:12.951 17:21:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:12.951 17:21:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:25:12.951 17:21:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:25:12.951 17:21:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:25:12.951 17:21:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:12.951 17:21:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:25:12.951 17:21:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:25:12.951 17:21:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:25:12.951 17:21:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:12.951 17:21:50 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:25:12.951 17:21:50 -- common/autotest_common.sh@1572 -- # return 0 00:25:12.951 17:21:50 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:25:12.951 17:21:50 -- common/autotest_common.sh@1580 -- # return 0 00:25:12.951 17:21:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:25:12.951 17:21:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:25:12.951 17:21:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:25:12.951 17:21:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:25:12.951 17:21:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:25:12.951 17:21:50 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:25:12.951 17:21:50 -- common/autotest_common.sh@10 -- # set +x 00:25:12.951 17:21:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:25:12.951 17:21:50 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:12.951 17:21:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:12.951 17:21:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.951 17:21:50 -- common/autotest_common.sh@10 -- # set +x 00:25:12.951 ************************************ 00:25:12.951 START TEST env 00:25:12.951 ************************************ 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:12.951 * Looking for test storage... 00:25:12.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1693 -- # lcov --version 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:12.951 17:21:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.951 17:21:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.951 17:21:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.951 17:21:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.951 17:21:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.951 17:21:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.951 17:21:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.951 17:21:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.951 17:21:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.951 17:21:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.951 17:21:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.951 17:21:50 env -- 
scripts/common.sh@344 -- # case "$op" in 00:25:12.951 17:21:50 env -- scripts/common.sh@345 -- # : 1 00:25:12.951 17:21:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.951 17:21:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:12.951 17:21:50 env -- scripts/common.sh@365 -- # decimal 1 00:25:12.951 17:21:50 env -- scripts/common.sh@353 -- # local d=1 00:25:12.951 17:21:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.951 17:21:50 env -- scripts/common.sh@355 -- # echo 1 00:25:12.951 17:21:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.951 17:21:50 env -- scripts/common.sh@366 -- # decimal 2 00:25:12.951 17:21:50 env -- scripts/common.sh@353 -- # local d=2 00:25:12.951 17:21:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.951 17:21:50 env -- scripts/common.sh@355 -- # echo 2 00:25:12.951 17:21:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.951 17:21:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.951 17:21:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.951 17:21:50 env -- scripts/common.sh@368 -- # return 0 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:12.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.951 --rc genhtml_branch_coverage=1 00:25:12.951 --rc genhtml_function_coverage=1 00:25:12.951 --rc genhtml_legend=1 00:25:12.951 --rc geninfo_all_blocks=1 00:25:12.951 --rc geninfo_unexecuted_blocks=1 00:25:12.951 00:25:12.951 ' 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:12.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.951 --rc genhtml_branch_coverage=1 00:25:12.951 --rc genhtml_function_coverage=1 00:25:12.951 --rc genhtml_legend=1 00:25:12.951 --rc 
geninfo_all_blocks=1 00:25:12.951 --rc geninfo_unexecuted_blocks=1 00:25:12.951 00:25:12.951 ' 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:12.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.951 --rc genhtml_branch_coverage=1 00:25:12.951 --rc genhtml_function_coverage=1 00:25:12.951 --rc genhtml_legend=1 00:25:12.951 --rc geninfo_all_blocks=1 00:25:12.951 --rc geninfo_unexecuted_blocks=1 00:25:12.951 00:25:12.951 ' 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:12.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.951 --rc genhtml_branch_coverage=1 00:25:12.951 --rc genhtml_function_coverage=1 00:25:12.951 --rc genhtml_legend=1 00:25:12.951 --rc geninfo_all_blocks=1 00:25:12.951 --rc geninfo_unexecuted_blocks=1 00:25:12.951 00:25:12.951 ' 00:25:12.951 17:21:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:12.951 17:21:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.951 17:21:50 env -- common/autotest_common.sh@10 -- # set +x 00:25:12.951 ************************************ 00:25:12.951 START TEST env_memory 00:25:12.951 ************************************ 00:25:12.952 17:21:50 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:13.208 00:25:13.208 00:25:13.208 CUnit - A unit testing framework for C - Version 2.1-3 00:25:13.208 http://cunit.sourceforge.net/ 00:25:13.208 00:25:13.208 00:25:13.208 Suite: memory 00:25:13.208 Test: alloc and free memory map ...[2024-11-26 17:21:50.465151] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:25:13.208 passed 00:25:13.208 Test: mem map translation ...[2024-11-26 17:21:50.541563] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:25:13.208 [2024-11-26 17:21:50.541678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:25:13.208 [2024-11-26 17:21:50.541799] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:25:13.208 [2024-11-26 17:21:50.541854] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:25:13.208 passed 00:25:13.208 Test: mem map registration ...[2024-11-26 17:21:50.653248] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:25:13.208 [2024-11-26 17:21:50.653360] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:25:13.466 passed 00:25:13.466 Test: mem map adjacent registrations ...passed 00:25:13.466 00:25:13.466 Run Summary: Type Total Ran Passed Failed Inactive 00:25:13.466 suites 1 1 n/a 0 0 00:25:13.466 tests 4 4 4 0 0 00:25:13.466 asserts 152 152 152 0 n/a 00:25:13.466 00:25:13.466 Elapsed time = 0.424 seconds 00:25:13.466 00:25:13.466 real 0m0.468s 00:25:13.466 user 0m0.426s 00:25:13.466 sys 0m0.033s 00:25:13.466 17:21:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.466 17:21:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:25:13.466 ************************************ 00:25:13.466 END TEST env_memory 00:25:13.466 ************************************ 00:25:13.466 17:21:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:13.466 
17:21:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:13.466 17:21:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.466 17:21:50 env -- common/autotest_common.sh@10 -- # set +x 00:25:13.466 ************************************ 00:25:13.466 START TEST env_vtophys 00:25:13.466 ************************************ 00:25:13.466 17:21:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:13.723 EAL: lib.eal log level changed from notice to debug 00:25:13.723 EAL: Detected lcore 0 as core 0 on socket 0 00:25:13.723 EAL: Detected lcore 1 as core 0 on socket 0 00:25:13.723 EAL: Detected lcore 2 as core 0 on socket 0 00:25:13.723 EAL: Detected lcore 3 as core 0 on socket 0 00:25:13.723 EAL: Detected lcore 4 as core 0 on socket 0 00:25:13.723 EAL: Detected lcore 5 as core 0 on socket 0 00:25:13.723 EAL: Detected lcore 6 as core 0 on socket 0 00:25:13.723 EAL: Detected lcore 7 as core 0 on socket 0 00:25:13.723 EAL: Detected lcore 8 as core 0 on socket 0 00:25:13.723 EAL: Detected lcore 9 as core 0 on socket 0 00:25:13.723 EAL: Maximum logical cores by configuration: 128 00:25:13.723 EAL: Detected CPU lcores: 10 00:25:13.723 EAL: Detected NUMA nodes: 1 00:25:13.723 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:25:13.723 EAL: Detected shared linkage of DPDK 00:25:13.723 EAL: No shared files mode enabled, IPC will be disabled 00:25:13.723 EAL: Selected IOVA mode 'PA' 00:25:13.723 EAL: Probing VFIO support... 00:25:13.723 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:13.723 EAL: VFIO modules not loaded, skipping VFIO support... 00:25:13.723 EAL: Ask a virtual area of 0x2e000 bytes 00:25:13.723 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:25:13.723 EAL: Setting up physically contiguous memory... 
00:25:13.723 EAL: Setting maximum number of open files to 524288 00:25:13.723 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:25:13.723 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:25:13.723 EAL: Ask a virtual area of 0x61000 bytes 00:25:13.723 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:25:13.723 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:13.723 EAL: Ask a virtual area of 0x400000000 bytes 00:25:13.723 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:25:13.723 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:25:13.723 EAL: Ask a virtual area of 0x61000 bytes 00:25:13.723 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:25:13.723 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:13.723 EAL: Ask a virtual area of 0x400000000 bytes 00:25:13.723 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:25:13.723 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:25:13.723 EAL: Ask a virtual area of 0x61000 bytes 00:25:13.723 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:25:13.723 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:13.723 EAL: Ask a virtual area of 0x400000000 bytes 00:25:13.723 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:25:13.723 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:25:13.723 EAL: Ask a virtual area of 0x61000 bytes 00:25:13.723 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:25:13.723 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:13.723 EAL: Ask a virtual area of 0x400000000 bytes 00:25:13.723 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:25:13.723 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:25:13.723 EAL: Hugepages will be freed exactly as allocated. 
00:25:13.723 EAL: No shared files mode enabled, IPC is disabled 00:25:13.723 EAL: No shared files mode enabled, IPC is disabled 00:25:13.723 EAL: TSC frequency is ~2100000 KHz 00:25:13.723 EAL: Main lcore 0 is ready (tid=7fc667caca40;cpuset=[0]) 00:25:13.723 EAL: Trying to obtain current memory policy. 00:25:13.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:13.723 EAL: Restoring previous memory policy: 0 00:25:13.723 EAL: request: mp_malloc_sync 00:25:13.723 EAL: No shared files mode enabled, IPC is disabled 00:25:13.723 EAL: Heap on socket 0 was expanded by 2MB 00:25:13.723 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:13.723 EAL: No PCI address specified using 'addr=' in: bus=pci 00:25:13.723 EAL: Mem event callback 'spdk:(nil)' registered 00:25:13.723 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:25:13.723 00:25:13.723 00:25:13.723 CUnit - A unit testing framework for C - Version 2.1-3 00:25:13.723 http://cunit.sourceforge.net/ 00:25:13.723 00:25:13.723 00:25:13.723 Suite: components_suite 00:25:14.656 Test: vtophys_malloc_test ...passed 00:25:14.656 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:25:14.656 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:14.656 EAL: Restoring previous memory policy: 4 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was expanded by 4MB 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was shrunk by 4MB 00:25:14.656 EAL: Trying to obtain current memory policy. 
00:25:14.656 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:14.656 EAL: Restoring previous memory policy: 4 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was expanded by 6MB 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was shrunk by 6MB 00:25:14.656 EAL: Trying to obtain current memory policy. 00:25:14.656 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:14.656 EAL: Restoring previous memory policy: 4 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was expanded by 10MB 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was shrunk by 10MB 00:25:14.656 EAL: Trying to obtain current memory policy. 00:25:14.656 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:14.656 EAL: Restoring previous memory policy: 4 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was expanded by 18MB 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was shrunk by 18MB 00:25:14.656 EAL: Trying to obtain current memory policy. 
00:25:14.656 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:14.656 EAL: Restoring previous memory policy: 4 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was expanded by 34MB 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was shrunk by 34MB 00:25:14.656 EAL: Trying to obtain current memory policy. 00:25:14.656 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:14.656 EAL: Restoring previous memory policy: 4 00:25:14.656 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.656 EAL: request: mp_malloc_sync 00:25:14.656 EAL: No shared files mode enabled, IPC is disabled 00:25:14.656 EAL: Heap on socket 0 was expanded by 66MB 00:25:14.914 EAL: Calling mem event callback 'spdk:(nil)' 00:25:14.914 EAL: request: mp_malloc_sync 00:25:14.914 EAL: No shared files mode enabled, IPC is disabled 00:25:14.914 EAL: Heap on socket 0 was shrunk by 66MB 00:25:14.914 EAL: Trying to obtain current memory policy. 00:25:14.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:15.172 EAL: Restoring previous memory policy: 4 00:25:15.172 EAL: Calling mem event callback 'spdk:(nil)' 00:25:15.172 EAL: request: mp_malloc_sync 00:25:15.172 EAL: No shared files mode enabled, IPC is disabled 00:25:15.172 EAL: Heap on socket 0 was expanded by 130MB 00:25:15.429 EAL: Calling mem event callback 'spdk:(nil)' 00:25:15.429 EAL: request: mp_malloc_sync 00:25:15.429 EAL: No shared files mode enabled, IPC is disabled 00:25:15.429 EAL: Heap on socket 0 was shrunk by 130MB 00:25:15.688 EAL: Trying to obtain current memory policy. 
00:25:15.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:15.688 EAL: Restoring previous memory policy: 4 00:25:15.688 EAL: Calling mem event callback 'spdk:(nil)' 00:25:15.688 EAL: request: mp_malloc_sync 00:25:15.688 EAL: No shared files mode enabled, IPC is disabled 00:25:15.688 EAL: Heap on socket 0 was expanded by 258MB 00:25:16.253 EAL: Calling mem event callback 'spdk:(nil)' 00:25:16.253 EAL: request: mp_malloc_sync 00:25:16.253 EAL: No shared files mode enabled, IPC is disabled 00:25:16.253 EAL: Heap on socket 0 was shrunk by 258MB 00:25:16.820 EAL: Trying to obtain current memory policy. 00:25:16.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:16.820 EAL: Restoring previous memory policy: 4 00:25:16.820 EAL: Calling mem event callback 'spdk:(nil)' 00:25:16.820 EAL: request: mp_malloc_sync 00:25:16.820 EAL: No shared files mode enabled, IPC is disabled 00:25:16.820 EAL: Heap on socket 0 was expanded by 514MB 00:25:18.196 EAL: Calling mem event callback 'spdk:(nil)' 00:25:18.196 EAL: request: mp_malloc_sync 00:25:18.196 EAL: No shared files mode enabled, IPC is disabled 00:25:18.196 EAL: Heap on socket 0 was shrunk by 514MB 00:25:19.132 EAL: Trying to obtain current memory policy. 
00:25:19.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:19.390 EAL: Restoring previous memory policy: 4 00:25:19.390 EAL: Calling mem event callback 'spdk:(nil)' 00:25:19.390 EAL: request: mp_malloc_sync 00:25:19.390 EAL: No shared files mode enabled, IPC is disabled 00:25:19.390 EAL: Heap on socket 0 was expanded by 1026MB 00:25:21.922 EAL: Calling mem event callback 'spdk:(nil)' 00:25:21.922 EAL: request: mp_malloc_sync 00:25:21.922 EAL: No shared files mode enabled, IPC is disabled 00:25:21.922 EAL: Heap on socket 0 was shrunk by 1026MB 00:25:23.823 passed 00:25:23.823 00:25:23.823 Run Summary: Type Total Ran Passed Failed Inactive 00:25:23.823 suites 1 1 n/a 0 0 00:25:23.823 tests 2 2 2 0 0 00:25:23.823 asserts 5635 5635 5635 0 n/a 00:25:23.823 00:25:23.823 Elapsed time = 9.559 seconds 00:25:23.823 EAL: Calling mem event callback 'spdk:(nil)' 00:25:23.823 EAL: request: mp_malloc_sync 00:25:23.823 EAL: No shared files mode enabled, IPC is disabled 00:25:23.823 EAL: Heap on socket 0 was shrunk by 2MB 00:25:23.823 EAL: No shared files mode enabled, IPC is disabled 00:25:23.823 EAL: No shared files mode enabled, IPC is disabled 00:25:23.823 EAL: No shared files mode enabled, IPC is disabled 00:25:23.823 00:25:23.823 real 0m9.938s 00:25:23.823 user 0m8.738s 00:25:23.823 sys 0m1.013s 00:25:23.823 17:22:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.823 17:22:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:25:23.823 ************************************ 00:25:23.823 END TEST env_vtophys 00:25:23.823 ************************************ 00:25:23.823 17:22:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:25:23.823 17:22:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:23.823 17:22:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.823 17:22:00 env -- common/autotest_common.sh@10 -- # set +x 00:25:23.823 
************************************ 00:25:23.823 START TEST env_pci 00:25:23.823 ************************************ 00:25:23.823 17:22:00 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:25:23.823 00:25:23.823 00:25:23.823 CUnit - A unit testing framework for C - Version 2.1-3 00:25:23.823 http://cunit.sourceforge.net/ 00:25:23.823 00:25:23.823 00:25:23.823 Suite: pci 00:25:23.823 Test: pci_hook ...[2024-11-26 17:22:00.932033] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56936 has claimed it 00:25:23.823 passed 00:25:23.823 00:25:23.823 Run Summary: Type Total Ran Passed Failed Inactive 00:25:23.823 suites 1 1 n/a 0 0 00:25:23.823 tests 1 1 1 0 0 00:25:23.823 asserts 25 25 25 0 n/a 00:25:23.823 00:25:23.823 Elapsed time = 0.010 seconds 00:25:23.823 EAL: Cannot find device (10000:00:01.0) 00:25:23.823 EAL: Failed to attach device on primary process 00:25:23.823 00:25:23.823 real 0m0.107s 00:25:23.823 user 0m0.046s 00:25:23.823 sys 0m0.060s 00:25:23.823 17:22:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.823 17:22:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:25:23.823 ************************************ 00:25:23.823 END TEST env_pci 00:25:23.823 ************************************ 00:25:23.823 17:22:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:25:23.823 17:22:01 env -- env/env.sh@15 -- # uname 00:25:23.823 17:22:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:25:23.823 17:22:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:25:23.823 17:22:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:25:23.823 17:22:01 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:23.823 17:22:01 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.823 17:22:01 env -- common/autotest_common.sh@10 -- # set +x 00:25:23.823 ************************************ 00:25:23.823 START TEST env_dpdk_post_init 00:25:23.823 ************************************ 00:25:23.823 17:22:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:25:23.823 EAL: Detected CPU lcores: 10 00:25:23.823 EAL: Detected NUMA nodes: 1 00:25:23.823 EAL: Detected shared linkage of DPDK 00:25:23.823 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:25:23.823 EAL: Selected IOVA mode 'PA' 00:25:24.082 TELEMETRY: No legacy callbacks, legacy socket not created 00:25:24.082 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:25:24.082 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:25:24.082 Starting DPDK initialization... 00:25:24.082 Starting SPDK post initialization... 00:25:24.082 SPDK NVMe probe 00:25:24.082 Attaching to 0000:00:10.0 00:25:24.082 Attaching to 0000:00:11.0 00:25:24.082 Attached to 0000:00:10.0 00:25:24.082 Attached to 0000:00:11.0 00:25:24.082 Cleaning up... 
00:25:24.082 00:25:24.082 real 0m0.344s 00:25:24.082 user 0m0.124s 00:25:24.082 sys 0m0.120s 00:25:24.082 ************************************ 00:25:24.082 END TEST env_dpdk_post_init 00:25:24.082 ************************************ 00:25:24.082 17:22:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.082 17:22:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:25:24.082 17:22:01 env -- env/env.sh@26 -- # uname 00:25:24.082 17:22:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:25:24.082 17:22:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:25:24.082 17:22:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:24.082 17:22:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.082 17:22:01 env -- common/autotest_common.sh@10 -- # set +x 00:25:24.082 ************************************ 00:25:24.082 START TEST env_mem_callbacks 00:25:24.082 ************************************ 00:25:24.082 17:22:01 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:25:24.082 EAL: Detected CPU lcores: 10 00:25:24.082 EAL: Detected NUMA nodes: 1 00:25:24.340 EAL: Detected shared linkage of DPDK 00:25:24.340 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:25:24.340 EAL: Selected IOVA mode 'PA' 00:25:24.340 TELEMETRY: No legacy callbacks, legacy socket not created 00:25:24.340 00:25:24.340 00:25:24.340 CUnit - A unit testing framework for C - Version 2.1-3 00:25:24.340 http://cunit.sourceforge.net/ 00:25:24.340 00:25:24.340 00:25:24.340 Suite: memory 00:25:24.340 Test: test ... 
00:25:24.340 register 0x200000200000 2097152 00:25:24.340 malloc 3145728 00:25:24.340 register 0x200000400000 4194304 00:25:24.340 buf 0x2000004fffc0 len 3145728 PASSED 00:25:24.340 malloc 64 00:25:24.340 buf 0x2000004ffec0 len 64 PASSED 00:25:24.340 malloc 4194304 00:25:24.340 register 0x200000800000 6291456 00:25:24.340 buf 0x2000009fffc0 len 4194304 PASSED 00:25:24.340 free 0x2000004fffc0 3145728 00:25:24.340 free 0x2000004ffec0 64 00:25:24.340 unregister 0x200000400000 4194304 PASSED 00:25:24.340 free 0x2000009fffc0 4194304 00:25:24.340 unregister 0x200000800000 6291456 PASSED 00:25:24.340 malloc 8388608 00:25:24.340 register 0x200000400000 10485760 00:25:24.340 buf 0x2000005fffc0 len 8388608 PASSED 00:25:24.340 free 0x2000005fffc0 8388608 00:25:24.340 unregister 0x200000400000 10485760 PASSED 00:25:24.340 passed 00:25:24.340 00:25:24.340 Run Summary: Type Total Ran Passed Failed Inactive 00:25:24.340 suites 1 1 n/a 0 0 00:25:24.340 tests 1 1 1 0 0 00:25:24.340 asserts 15 15 15 0 n/a 00:25:24.340 00:25:24.340 Elapsed time = 0.107 seconds 00:25:24.634 00:25:24.634 real 0m0.340s 00:25:24.634 user 0m0.148s 00:25:24.634 sys 0m0.088s 00:25:24.634 17:22:01 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.634 17:22:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:25:24.634 ************************************ 00:25:24.634 END TEST env_mem_callbacks 00:25:24.634 ************************************ 00:25:24.634 ************************************ 00:25:24.634 END TEST env 00:25:24.634 ************************************ 00:25:24.634 00:25:24.634 real 0m11.666s 00:25:24.634 user 0m9.686s 00:25:24.634 sys 0m1.589s 00:25:24.634 17:22:01 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.634 17:22:01 env -- common/autotest_common.sh@10 -- # set +x 00:25:24.634 17:22:01 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:25:24.634 17:22:01 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:24.634 17:22:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.634 17:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:24.634 ************************************ 00:25:24.634 START TEST rpc 00:25:24.634 ************************************ 00:25:24.634 17:22:01 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:25:24.634 * Looking for test storage... 00:25:24.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:25:24.634 17:22:01 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:24.634 17:22:02 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:24.634 17:22:02 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:24.634 17:22:02 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:24.635 17:22:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.635 17:22:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.635 17:22:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.635 17:22:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.635 17:22:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.635 17:22:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.635 17:22:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.635 17:22:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:25:24.635 17:22:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.635 17:22:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.635 17:22:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.635 17:22:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:25:24.635 17:22:02 rpc -- scripts/common.sh@345 -- # : 1 00:25:24.635 17:22:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.635 17:22:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:24.635 17:22:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:25:24.635 17:22:02 rpc -- scripts/common.sh@353 -- # local d=1 00:25:24.635 17:22:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.635 17:22:02 rpc -- scripts/common.sh@355 -- # echo 1 00:25:24.892 17:22:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.892 17:22:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:25:24.892 17:22:02 rpc -- scripts/common.sh@353 -- # local d=2 00:25:24.892 17:22:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.892 17:22:02 rpc -- scripts/common.sh@355 -- # echo 2 00:25:24.892 17:22:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:24.892 17:22:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.892 17:22:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.892 17:22:02 rpc -- scripts/common.sh@368 -- # return 0 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:24.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.892 --rc genhtml_branch_coverage=1 00:25:24.892 --rc genhtml_function_coverage=1 00:25:24.892 --rc genhtml_legend=1 00:25:24.892 --rc geninfo_all_blocks=1 00:25:24.892 --rc geninfo_unexecuted_blocks=1 00:25:24.892 00:25:24.892 ' 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:24.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.892 --rc genhtml_branch_coverage=1 00:25:24.892 --rc genhtml_function_coverage=1 00:25:24.892 --rc genhtml_legend=1 00:25:24.892 --rc geninfo_all_blocks=1 00:25:24.892 --rc geninfo_unexecuted_blocks=1 00:25:24.892 00:25:24.892 ' 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:24.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:25:24.892 --rc genhtml_branch_coverage=1 00:25:24.892 --rc genhtml_function_coverage=1 00:25:24.892 --rc genhtml_legend=1 00:25:24.892 --rc geninfo_all_blocks=1 00:25:24.892 --rc geninfo_unexecuted_blocks=1 00:25:24.892 00:25:24.892 ' 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:24.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.892 --rc genhtml_branch_coverage=1 00:25:24.892 --rc genhtml_function_coverage=1 00:25:24.892 --rc genhtml_legend=1 00:25:24.892 --rc geninfo_all_blocks=1 00:25:24.892 --rc geninfo_unexecuted_blocks=1 00:25:24.892 00:25:24.892 ' 00:25:24.892 17:22:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57063 00:25:24.892 17:22:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:24.892 17:22:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:25:24.892 17:22:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57063 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 57063 ']' 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.892 17:22:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.893 17:22:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:24.893 [2024-11-26 17:22:02.208214] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:25:24.893 [2024-11-26 17:22:02.208549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57063 ] 00:25:25.151 [2024-11-26 17:22:02.394542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.151 [2024-11-26 17:22:02.580973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:25:25.151 [2024-11-26 17:22:02.581321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57063' to capture a snapshot of events at runtime. 00:25:25.151 [2024-11-26 17:22:02.581528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.151 [2024-11-26 17:22:02.581687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.151 [2024-11-26 17:22:02.581740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57063 for offline analysis/debug. 
00:25:25.151 [2024-11-26 17:22:02.583937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.527 17:22:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.527 17:22:03 rpc -- common/autotest_common.sh@868 -- # return 0 00:25:26.527 17:22:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:25:26.527 17:22:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:25:26.527 17:22:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:25:26.527 17:22:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:25:26.527 17:22:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:26.527 17:22:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.527 17:22:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:26.527 ************************************ 00:25:26.527 START TEST rpc_integrity 00:25:26.527 ************************************ 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:25:26.527 17:22:03 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:25:26.527 { 00:25:26.527 "name": "Malloc0", 00:25:26.527 "aliases": [ 00:25:26.527 "631a206b-9603-44ad-afc2-ba52000c40ee" 00:25:26.527 ], 00:25:26.527 "product_name": "Malloc disk", 00:25:26.527 "block_size": 512, 00:25:26.527 "num_blocks": 16384, 00:25:26.527 "uuid": "631a206b-9603-44ad-afc2-ba52000c40ee", 00:25:26.527 "assigned_rate_limits": { 00:25:26.527 "rw_ios_per_sec": 0, 00:25:26.527 "rw_mbytes_per_sec": 0, 00:25:26.527 "r_mbytes_per_sec": 0, 00:25:26.527 "w_mbytes_per_sec": 0 00:25:26.527 }, 00:25:26.527 "claimed": false, 00:25:26.527 "zoned": false, 00:25:26.527 "supported_io_types": { 00:25:26.527 "read": true, 00:25:26.527 "write": true, 00:25:26.527 "unmap": true, 00:25:26.527 "flush": true, 00:25:26.527 "reset": true, 00:25:26.527 "nvme_admin": false, 00:25:26.527 "nvme_io": false, 00:25:26.527 "nvme_io_md": false, 00:25:26.527 "write_zeroes": true, 00:25:26.527 "zcopy": true, 00:25:26.527 "get_zone_info": false, 00:25:26.527 "zone_management": false, 00:25:26.527 "zone_append": false, 00:25:26.527 "compare": false, 00:25:26.527 "compare_and_write": false, 00:25:26.527 "abort": true, 00:25:26.527 "seek_hole": false, 
00:25:26.527 "seek_data": false, 00:25:26.527 "copy": true, 00:25:26.527 "nvme_iov_md": false 00:25:26.527 }, 00:25:26.527 "memory_domains": [ 00:25:26.527 { 00:25:26.527 "dma_device_id": "system", 00:25:26.527 "dma_device_type": 1 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.527 "dma_device_type": 2 00:25:26.527 } 00:25:26.527 ], 00:25:26.527 "driver_specific": {} 00:25:26.527 } 00:25:26.527 ]' 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:26.527 [2024-11-26 17:22:03.775062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:25:26.527 [2024-11-26 17:22:03.775571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.527 [2024-11-26 17:22:03.775614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:26.527 [2024-11-26 17:22:03.775637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.527 [2024-11-26 17:22:03.778683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.527 Passthru0 00:25:26.527 [2024-11-26 17:22:03.778853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:25:26.527 { 00:25:26.527 "name": "Malloc0", 00:25:26.527 "aliases": [ 00:25:26.527 "631a206b-9603-44ad-afc2-ba52000c40ee" 00:25:26.527 ], 00:25:26.527 "product_name": "Malloc disk", 00:25:26.527 "block_size": 512, 00:25:26.527 "num_blocks": 16384, 00:25:26.527 "uuid": "631a206b-9603-44ad-afc2-ba52000c40ee", 00:25:26.527 "assigned_rate_limits": { 00:25:26.527 "rw_ios_per_sec": 0, 00:25:26.527 "rw_mbytes_per_sec": 0, 00:25:26.527 "r_mbytes_per_sec": 0, 00:25:26.527 "w_mbytes_per_sec": 0 00:25:26.527 }, 00:25:26.527 "claimed": true, 00:25:26.527 "claim_type": "exclusive_write", 00:25:26.527 "zoned": false, 00:25:26.527 "supported_io_types": { 00:25:26.527 "read": true, 00:25:26.527 "write": true, 00:25:26.527 "unmap": true, 00:25:26.527 "flush": true, 00:25:26.527 "reset": true, 00:25:26.527 "nvme_admin": false, 00:25:26.527 "nvme_io": false, 00:25:26.527 "nvme_io_md": false, 00:25:26.527 "write_zeroes": true, 00:25:26.527 "zcopy": true, 00:25:26.527 "get_zone_info": false, 00:25:26.527 "zone_management": false, 00:25:26.527 "zone_append": false, 00:25:26.527 "compare": false, 00:25:26.527 "compare_and_write": false, 00:25:26.527 "abort": true, 00:25:26.527 "seek_hole": false, 00:25:26.527 "seek_data": false, 00:25:26.527 "copy": true, 00:25:26.527 "nvme_iov_md": false 00:25:26.527 }, 00:25:26.527 "memory_domains": [ 00:25:26.527 { 00:25:26.527 "dma_device_id": "system", 00:25:26.527 "dma_device_type": 1 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.527 "dma_device_type": 2 00:25:26.527 } 00:25:26.527 ], 00:25:26.527 "driver_specific": {} 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "name": "Passthru0", 00:25:26.527 "aliases": [ 00:25:26.527 "68c8df82-1ab5-56a6-89b7-99abe5ce2c58" 00:25:26.527 ], 00:25:26.527 "product_name": "passthru", 00:25:26.527 
"block_size": 512, 00:25:26.527 "num_blocks": 16384, 00:25:26.527 "uuid": "68c8df82-1ab5-56a6-89b7-99abe5ce2c58", 00:25:26.527 "assigned_rate_limits": { 00:25:26.527 "rw_ios_per_sec": 0, 00:25:26.527 "rw_mbytes_per_sec": 0, 00:25:26.527 "r_mbytes_per_sec": 0, 00:25:26.527 "w_mbytes_per_sec": 0 00:25:26.527 }, 00:25:26.527 "claimed": false, 00:25:26.527 "zoned": false, 00:25:26.527 "supported_io_types": { 00:25:26.527 "read": true, 00:25:26.527 "write": true, 00:25:26.527 "unmap": true, 00:25:26.527 "flush": true, 00:25:26.527 "reset": true, 00:25:26.527 "nvme_admin": false, 00:25:26.527 "nvme_io": false, 00:25:26.527 "nvme_io_md": false, 00:25:26.527 "write_zeroes": true, 00:25:26.527 "zcopy": true, 00:25:26.527 "get_zone_info": false, 00:25:26.527 "zone_management": false, 00:25:26.527 "zone_append": false, 00:25:26.527 "compare": false, 00:25:26.527 "compare_and_write": false, 00:25:26.527 "abort": true, 00:25:26.527 "seek_hole": false, 00:25:26.527 "seek_data": false, 00:25:26.527 "copy": true, 00:25:26.527 "nvme_iov_md": false 00:25:26.527 }, 00:25:26.527 "memory_domains": [ 00:25:26.527 { 00:25:26.527 "dma_device_id": "system", 00:25:26.527 "dma_device_type": 1 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.527 "dma_device_type": 2 00:25:26.527 } 00:25:26.527 ], 00:25:26.527 "driver_specific": { 00:25:26.527 "passthru": { 00:25:26.527 "name": "Passthru0", 00:25:26.527 "base_bdev_name": "Malloc0" 00:25:26.527 } 00:25:26.527 } 00:25:26.527 } 00:25:26.527 ]' 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:25:26.527 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.527 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:26.528 17:22:03 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.528 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:26.528 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.528 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:26.528 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.528 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:25:26.528 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.528 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:26.528 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.528 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:25:26.528 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:25:26.786 ************************************ 00:25:26.786 END TEST rpc_integrity 00:25:26.786 ************************************ 00:25:26.786 17:22:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:25:26.786 00:25:26.786 real 0m0.355s 00:25:26.786 user 0m0.199s 00:25:26.786 sys 0m0.048s 00:25:26.786 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.786 17:22:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:26.786 17:22:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:25:26.786 17:22:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:26.786 17:22:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.786 17:22:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:26.786 ************************************ 00:25:26.786 START TEST rpc_plugins 00:25:26.786 ************************************ 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:25:26.786 { 00:25:26.786 "name": "Malloc1", 00:25:26.786 "aliases": [ 00:25:26.786 "758b7acf-5f23-4f5b-a4a2-1713f7f30222" 00:25:26.786 ], 00:25:26.786 "product_name": "Malloc disk", 00:25:26.786 "block_size": 4096, 00:25:26.786 "num_blocks": 256, 00:25:26.786 "uuid": "758b7acf-5f23-4f5b-a4a2-1713f7f30222", 00:25:26.786 "assigned_rate_limits": { 00:25:26.786 "rw_ios_per_sec": 0, 00:25:26.786 "rw_mbytes_per_sec": 0, 00:25:26.786 "r_mbytes_per_sec": 0, 00:25:26.786 "w_mbytes_per_sec": 0 00:25:26.786 }, 00:25:26.786 "claimed": false, 00:25:26.786 "zoned": false, 00:25:26.786 "supported_io_types": { 00:25:26.786 "read": true, 00:25:26.786 "write": true, 00:25:26.786 "unmap": true, 00:25:26.786 "flush": true, 00:25:26.786 "reset": true, 00:25:26.786 "nvme_admin": false, 00:25:26.786 "nvme_io": false, 00:25:26.786 "nvme_io_md": false, 00:25:26.786 "write_zeroes": true, 00:25:26.786 "zcopy": true, 00:25:26.786 "get_zone_info": false, 00:25:26.786 "zone_management": false, 00:25:26.786 "zone_append": false, 00:25:26.786 "compare": false, 00:25:26.786 "compare_and_write": false, 00:25:26.786 "abort": true, 00:25:26.786 "seek_hole": false, 00:25:26.786 "seek_data": false, 00:25:26.786 "copy": 
true, 00:25:26.786 "nvme_iov_md": false 00:25:26.786 }, 00:25:26.786 "memory_domains": [ 00:25:26.786 { 00:25:26.786 "dma_device_id": "system", 00:25:26.786 "dma_device_type": 1 00:25:26.786 }, 00:25:26.786 { 00:25:26.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.786 "dma_device_type": 2 00:25:26.786 } 00:25:26.786 ], 00:25:26.786 "driver_specific": {} 00:25:26.786 } 00:25:26.786 ]' 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:25:26.786 ************************************ 00:25:26.786 END TEST rpc_plugins 00:25:26.786 ************************************ 00:25:26.786 17:22:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:25:26.786 00:25:26.786 real 0m0.162s 00:25:26.786 user 0m0.095s 00:25:26.786 sys 0m0.026s 00:25:26.786 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.787 17:22:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:27.045 17:22:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:25:27.045 17:22:04 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:27.045 17:22:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.045 17:22:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:27.045 ************************************ 00:25:27.045 START TEST rpc_trace_cmd_test 00:25:27.045 ************************************ 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:25:27.045 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57063", 00:25:27.045 "tpoint_group_mask": "0x8", 00:25:27.045 "iscsi_conn": { 00:25:27.045 "mask": "0x2", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "scsi": { 00:25:27.045 "mask": "0x4", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "bdev": { 00:25:27.045 "mask": "0x8", 00:25:27.045 "tpoint_mask": "0xffffffffffffffff" 00:25:27.045 }, 00:25:27.045 "nvmf_rdma": { 00:25:27.045 "mask": "0x10", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "nvmf_tcp": { 00:25:27.045 "mask": "0x20", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "ftl": { 00:25:27.045 "mask": "0x40", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "blobfs": { 00:25:27.045 "mask": "0x80", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "dsa": { 00:25:27.045 "mask": "0x200", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "thread": { 00:25:27.045 "mask": "0x400", 00:25:27.045 
"tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "nvme_pcie": { 00:25:27.045 "mask": "0x800", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "iaa": { 00:25:27.045 "mask": "0x1000", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "nvme_tcp": { 00:25:27.045 "mask": "0x2000", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "bdev_nvme": { 00:25:27.045 "mask": "0x4000", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "sock": { 00:25:27.045 "mask": "0x8000", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "blob": { 00:25:27.045 "mask": "0x10000", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "bdev_raid": { 00:25:27.045 "mask": "0x20000", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 }, 00:25:27.045 "scheduler": { 00:25:27.045 "mask": "0x40000", 00:25:27.045 "tpoint_mask": "0x0" 00:25:27.045 } 00:25:27.045 }' 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:25:27.045 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:25:27.304 ************************************ 00:25:27.304 END TEST rpc_trace_cmd_test 00:25:27.304 ************************************ 00:25:27.304 17:22:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:25:27.304 00:25:27.304 real 0m0.256s 00:25:27.304 user 
0m0.205s 00:25:27.304 sys 0m0.041s 00:25:27.304 17:22:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.304 17:22:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.304 17:22:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:25:27.304 17:22:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:25:27.304 17:22:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:25:27.304 17:22:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:27.304 17:22:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.304 17:22:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:27.304 ************************************ 00:25:27.304 START TEST rpc_daemon_integrity 00:25:27.304 ************************************ 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.304 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:25:27.305 { 00:25:27.305 "name": "Malloc2", 00:25:27.305 "aliases": [ 00:25:27.305 "5facb36c-5720-47d6-8bb7-e215759773aa" 00:25:27.305 ], 00:25:27.305 "product_name": "Malloc disk", 00:25:27.305 "block_size": 512, 00:25:27.305 "num_blocks": 16384, 00:25:27.305 "uuid": "5facb36c-5720-47d6-8bb7-e215759773aa", 00:25:27.305 "assigned_rate_limits": { 00:25:27.305 "rw_ios_per_sec": 0, 00:25:27.305 "rw_mbytes_per_sec": 0, 00:25:27.305 "r_mbytes_per_sec": 0, 00:25:27.305 "w_mbytes_per_sec": 0 00:25:27.305 }, 00:25:27.305 "claimed": false, 00:25:27.305 "zoned": false, 00:25:27.305 "supported_io_types": { 00:25:27.305 "read": true, 00:25:27.305 "write": true, 00:25:27.305 "unmap": true, 00:25:27.305 "flush": true, 00:25:27.305 "reset": true, 00:25:27.305 "nvme_admin": false, 00:25:27.305 "nvme_io": false, 00:25:27.305 "nvme_io_md": false, 00:25:27.305 "write_zeroes": true, 00:25:27.305 "zcopy": true, 00:25:27.305 "get_zone_info": false, 00:25:27.305 "zone_management": false, 00:25:27.305 "zone_append": false, 00:25:27.305 "compare": false, 00:25:27.305 "compare_and_write": false, 00:25:27.305 "abort": true, 00:25:27.305 "seek_hole": false, 00:25:27.305 "seek_data": false, 00:25:27.305 "copy": true, 00:25:27.305 "nvme_iov_md": false 00:25:27.305 }, 00:25:27.305 "memory_domains": [ 00:25:27.305 { 00:25:27.305 "dma_device_id": "system", 00:25:27.305 "dma_device_type": 1 00:25:27.305 }, 00:25:27.305 { 00:25:27.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.305 "dma_device_type": 2 00:25:27.305 } 
00:25:27.305 ], 00:25:27.305 "driver_specific": {} 00:25:27.305 } 00:25:27.305 ]' 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.305 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:27.305 [2024-11-26 17:22:04.745169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:25:27.305 [2024-11-26 17:22:04.745247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.305 [2024-11-26 17:22:04.745275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:27.305 [2024-11-26 17:22:04.745291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.305 [2024-11-26 17:22:04.748321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.305 [2024-11-26 17:22:04.748373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:25:27.563 Passthru0 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:25:27.563 { 00:25:27.563 "name": "Malloc2", 00:25:27.563 "aliases": [ 00:25:27.563 "5facb36c-5720-47d6-8bb7-e215759773aa" 
00:25:27.563 ], 00:25:27.563 "product_name": "Malloc disk", 00:25:27.563 "block_size": 512, 00:25:27.563 "num_blocks": 16384, 00:25:27.563 "uuid": "5facb36c-5720-47d6-8bb7-e215759773aa", 00:25:27.563 "assigned_rate_limits": { 00:25:27.563 "rw_ios_per_sec": 0, 00:25:27.563 "rw_mbytes_per_sec": 0, 00:25:27.563 "r_mbytes_per_sec": 0, 00:25:27.563 "w_mbytes_per_sec": 0 00:25:27.563 }, 00:25:27.563 "claimed": true, 00:25:27.563 "claim_type": "exclusive_write", 00:25:27.563 "zoned": false, 00:25:27.563 "supported_io_types": { 00:25:27.563 "read": true, 00:25:27.563 "write": true, 00:25:27.563 "unmap": true, 00:25:27.563 "flush": true, 00:25:27.563 "reset": true, 00:25:27.563 "nvme_admin": false, 00:25:27.563 "nvme_io": false, 00:25:27.563 "nvme_io_md": false, 00:25:27.563 "write_zeroes": true, 00:25:27.563 "zcopy": true, 00:25:27.563 "get_zone_info": false, 00:25:27.563 "zone_management": false, 00:25:27.563 "zone_append": false, 00:25:27.563 "compare": false, 00:25:27.563 "compare_and_write": false, 00:25:27.563 "abort": true, 00:25:27.563 "seek_hole": false, 00:25:27.563 "seek_data": false, 00:25:27.563 "copy": true, 00:25:27.563 "nvme_iov_md": false 00:25:27.563 }, 00:25:27.563 "memory_domains": [ 00:25:27.563 { 00:25:27.563 "dma_device_id": "system", 00:25:27.563 "dma_device_type": 1 00:25:27.563 }, 00:25:27.563 { 00:25:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.563 "dma_device_type": 2 00:25:27.563 } 00:25:27.563 ], 00:25:27.563 "driver_specific": {} 00:25:27.563 }, 00:25:27.563 { 00:25:27.563 "name": "Passthru0", 00:25:27.563 "aliases": [ 00:25:27.563 "e62f9218-f46b-55d8-8c40-3270052f0f8e" 00:25:27.563 ], 00:25:27.563 "product_name": "passthru", 00:25:27.563 "block_size": 512, 00:25:27.563 "num_blocks": 16384, 00:25:27.563 "uuid": "e62f9218-f46b-55d8-8c40-3270052f0f8e", 00:25:27.563 "assigned_rate_limits": { 00:25:27.563 "rw_ios_per_sec": 0, 00:25:27.563 "rw_mbytes_per_sec": 0, 00:25:27.563 "r_mbytes_per_sec": 0, 00:25:27.563 "w_mbytes_per_sec": 0 
00:25:27.563 }, 00:25:27.563 "claimed": false, 00:25:27.563 "zoned": false, 00:25:27.563 "supported_io_types": { 00:25:27.563 "read": true, 00:25:27.563 "write": true, 00:25:27.563 "unmap": true, 00:25:27.563 "flush": true, 00:25:27.563 "reset": true, 00:25:27.563 "nvme_admin": false, 00:25:27.563 "nvme_io": false, 00:25:27.563 "nvme_io_md": false, 00:25:27.563 "write_zeroes": true, 00:25:27.563 "zcopy": true, 00:25:27.563 "get_zone_info": false, 00:25:27.563 "zone_management": false, 00:25:27.563 "zone_append": false, 00:25:27.563 "compare": false, 00:25:27.563 "compare_and_write": false, 00:25:27.563 "abort": true, 00:25:27.563 "seek_hole": false, 00:25:27.563 "seek_data": false, 00:25:27.563 "copy": true, 00:25:27.563 "nvme_iov_md": false 00:25:27.563 }, 00:25:27.563 "memory_domains": [ 00:25:27.563 { 00:25:27.563 "dma_device_id": "system", 00:25:27.563 "dma_device_type": 1 00:25:27.563 }, 00:25:27.563 { 00:25:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.563 "dma_device_type": 2 00:25:27.563 } 00:25:27.563 ], 00:25:27.563 "driver_specific": { 00:25:27.563 "passthru": { 00:25:27.563 "name": "Passthru0", 00:25:27.563 "base_bdev_name": "Malloc2" 00:25:27.563 } 00:25:27.563 } 00:25:27.563 } 00:25:27.563 ]' 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:25:27.563 ************************************ 00:25:27.563 END TEST rpc_daemon_integrity 00:25:27.563 ************************************ 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:25:27.563 00:25:27.563 real 0m0.366s 00:25:27.563 user 0m0.220s 00:25:27.563 sys 0m0.048s 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.563 17:22:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:27.563 17:22:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:27.563 17:22:04 rpc -- rpc/rpc.sh@84 -- # killprocess 57063 00:25:27.563 17:22:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 57063 ']' 00:25:27.563 17:22:04 rpc -- common/autotest_common.sh@958 -- # kill -0 57063 00:25:27.563 17:22:04 rpc -- common/autotest_common.sh@959 -- # uname 00:25:27.563 17:22:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.563 17:22:04 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57063 00:25:27.822 killing process with pid 57063 00:25:27.822 17:22:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.822 17:22:05 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:25:27.822 17:22:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57063' 00:25:27.822 17:22:05 rpc -- common/autotest_common.sh@973 -- # kill 57063 00:25:27.822 17:22:05 rpc -- common/autotest_common.sh@978 -- # wait 57063 00:25:30.360 00:25:30.360 real 0m5.777s 00:25:30.360 user 0m6.381s 00:25:30.360 sys 0m0.925s 00:25:30.360 17:22:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:30.360 17:22:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:30.360 ************************************ 00:25:30.360 END TEST rpc 00:25:30.360 ************************************ 00:25:30.360 17:22:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:25:30.360 17:22:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:30.360 17:22:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.360 17:22:07 -- common/autotest_common.sh@10 -- # set +x 00:25:30.360 ************************************ 00:25:30.360 START TEST skip_rpc 00:25:30.360 ************************************ 00:25:30.360 17:22:07 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:25:30.618 * Looking for test storage... 
00:25:30.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:25:30.618 17:22:07 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:30.618 17:22:07 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:30.618 17:22:07 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:30.618 17:22:07 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.618 17:22:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:25:30.618 17:22:07 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.618 17:22:07 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:30.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.618 --rc genhtml_branch_coverage=1 00:25:30.618 --rc genhtml_function_coverage=1 00:25:30.618 --rc genhtml_legend=1 00:25:30.618 --rc geninfo_all_blocks=1 00:25:30.618 --rc geninfo_unexecuted_blocks=1 00:25:30.618 00:25:30.618 ' 00:25:30.618 17:22:07 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:30.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.618 --rc genhtml_branch_coverage=1 00:25:30.618 --rc genhtml_function_coverage=1 00:25:30.618 --rc genhtml_legend=1 00:25:30.618 --rc geninfo_all_blocks=1 00:25:30.618 --rc geninfo_unexecuted_blocks=1 00:25:30.618 00:25:30.618 ' 00:25:30.618 17:22:07 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:25:30.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.618 --rc genhtml_branch_coverage=1 00:25:30.619 --rc genhtml_function_coverage=1 00:25:30.619 --rc genhtml_legend=1 00:25:30.619 --rc geninfo_all_blocks=1 00:25:30.619 --rc geninfo_unexecuted_blocks=1 00:25:30.619 00:25:30.619 ' 00:25:30.619 17:22:07 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:30.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.619 --rc genhtml_branch_coverage=1 00:25:30.619 --rc genhtml_function_coverage=1 00:25:30.619 --rc genhtml_legend=1 00:25:30.619 --rc geninfo_all_blocks=1 00:25:30.619 --rc geninfo_unexecuted_blocks=1 00:25:30.619 00:25:30.619 ' 00:25:30.619 17:22:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:25:30.619 17:22:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:25:30.619 17:22:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:25:30.619 17:22:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:30.619 17:22:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.619 17:22:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:30.619 ************************************ 00:25:30.619 START TEST skip_rpc 00:25:30.619 ************************************ 00:25:30.619 17:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:25:30.619 17:22:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57303 00:25:30.619 17:22:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:30.619 17:22:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:25:30.619 17:22:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:25:30.877 [2024-11-26 17:22:08.137849] Starting SPDK v25.01-pre 
git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:25:30.877 [2024-11-26 17:22:08.138061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57303 ] 00:25:31.135 [2024-11-26 17:22:08.335118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.135 [2024-11-26 17:22:08.470404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57303 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57303 ']' 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57303 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:36.523 17:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57303 00:25:36.523 killing process with pid 57303 00:25:36.523 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:36.524 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:36.524 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57303' 00:25:36.524 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57303 00:25:36.524 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57303 00:25:39.053 00:25:39.053 real 0m7.971s 00:25:39.053 user 0m7.398s 00:25:39.053 sys 0m0.469s 00:25:39.053 17:22:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:39.053 17:22:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:39.053 ************************************ 00:25:39.053 END TEST skip_rpc 00:25:39.053 ************************************ 00:25:39.053 17:22:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:25:39.053 17:22:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:39.053 17:22:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:39.053 17:22:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:39.053 
************************************ 00:25:39.053 START TEST skip_rpc_with_json 00:25:39.053 ************************************ 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57413 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57413 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57413 ']' 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.053 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:25:39.053 [2024-11-26 17:22:16.144134] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:25:39.053 [2024-11-26 17:22:16.144319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57413 ] 00:25:39.053 [2024-11-26 17:22:16.369502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.311 [2024-11-26 17:22:16.517564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:25:40.245 [2024-11-26 17:22:17.570556] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:25:40.245 request: 00:25:40.245 { 00:25:40.245 "trtype": "tcp", 00:25:40.245 "method": "nvmf_get_transports", 00:25:40.245 "req_id": 1 00:25:40.245 } 00:25:40.245 Got JSON-RPC error response 00:25:40.245 response: 00:25:40.245 { 00:25:40.245 "code": -19, 00:25:40.245 "message": "No such device" 00:25:40.245 } 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:25:40.245 [2024-11-26 17:22:17.582712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.245 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:25:40.504 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.504 17:22:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:25:40.504 { 00:25:40.504 "subsystems": [ 00:25:40.504 { 00:25:40.504 "subsystem": "fsdev", 00:25:40.504 "config": [ 00:25:40.504 { 00:25:40.504 "method": "fsdev_set_opts", 00:25:40.504 "params": { 00:25:40.504 "fsdev_io_pool_size": 65535, 00:25:40.504 "fsdev_io_cache_size": 256 00:25:40.504 } 00:25:40.504 } 00:25:40.504 ] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "keyring", 00:25:40.504 "config": [] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "iobuf", 00:25:40.504 "config": [ 00:25:40.504 { 00:25:40.504 "method": "iobuf_set_options", 00:25:40.504 "params": { 00:25:40.504 "small_pool_count": 8192, 00:25:40.504 "large_pool_count": 1024, 00:25:40.504 "small_bufsize": 8192, 00:25:40.504 "large_bufsize": 135168, 00:25:40.504 "enable_numa": false 00:25:40.504 } 00:25:40.504 } 00:25:40.504 ] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "sock", 00:25:40.504 "config": [ 00:25:40.504 { 00:25:40.504 "method": "sock_set_default_impl", 00:25:40.504 "params": { 00:25:40.504 "impl_name": "posix" 00:25:40.504 } 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "method": "sock_impl_set_options", 00:25:40.504 "params": { 00:25:40.504 "impl_name": "ssl", 00:25:40.504 "recv_buf_size": 4096, 00:25:40.504 "send_buf_size": 4096, 00:25:40.504 "enable_recv_pipe": true, 00:25:40.504 "enable_quickack": false, 00:25:40.504 
"enable_placement_id": 0, 00:25:40.504 "enable_zerocopy_send_server": true, 00:25:40.504 "enable_zerocopy_send_client": false, 00:25:40.504 "zerocopy_threshold": 0, 00:25:40.504 "tls_version": 0, 00:25:40.504 "enable_ktls": false 00:25:40.504 } 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "method": "sock_impl_set_options", 00:25:40.504 "params": { 00:25:40.504 "impl_name": "posix", 00:25:40.504 "recv_buf_size": 2097152, 00:25:40.504 "send_buf_size": 2097152, 00:25:40.504 "enable_recv_pipe": true, 00:25:40.504 "enable_quickack": false, 00:25:40.504 "enable_placement_id": 0, 00:25:40.504 "enable_zerocopy_send_server": true, 00:25:40.504 "enable_zerocopy_send_client": false, 00:25:40.504 "zerocopy_threshold": 0, 00:25:40.504 "tls_version": 0, 00:25:40.504 "enable_ktls": false 00:25:40.504 } 00:25:40.504 } 00:25:40.504 ] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "vmd", 00:25:40.504 "config": [] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "accel", 00:25:40.504 "config": [ 00:25:40.504 { 00:25:40.504 "method": "accel_set_options", 00:25:40.504 "params": { 00:25:40.504 "small_cache_size": 128, 00:25:40.504 "large_cache_size": 16, 00:25:40.504 "task_count": 2048, 00:25:40.504 "sequence_count": 2048, 00:25:40.504 "buf_count": 2048 00:25:40.504 } 00:25:40.504 } 00:25:40.504 ] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "bdev", 00:25:40.504 "config": [ 00:25:40.504 { 00:25:40.504 "method": "bdev_set_options", 00:25:40.504 "params": { 00:25:40.504 "bdev_io_pool_size": 65535, 00:25:40.504 "bdev_io_cache_size": 256, 00:25:40.504 "bdev_auto_examine": true, 00:25:40.504 "iobuf_small_cache_size": 128, 00:25:40.504 "iobuf_large_cache_size": 16 00:25:40.504 } 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "method": "bdev_raid_set_options", 00:25:40.504 "params": { 00:25:40.504 "process_window_size_kb": 1024, 00:25:40.504 "process_max_bandwidth_mb_sec": 0 00:25:40.504 } 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "method": "bdev_iscsi_set_options", 
00:25:40.504 "params": { 00:25:40.504 "timeout_sec": 30 00:25:40.504 } 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "method": "bdev_nvme_set_options", 00:25:40.504 "params": { 00:25:40.504 "action_on_timeout": "none", 00:25:40.504 "timeout_us": 0, 00:25:40.504 "timeout_admin_us": 0, 00:25:40.504 "keep_alive_timeout_ms": 10000, 00:25:40.504 "arbitration_burst": 0, 00:25:40.504 "low_priority_weight": 0, 00:25:40.504 "medium_priority_weight": 0, 00:25:40.504 "high_priority_weight": 0, 00:25:40.504 "nvme_adminq_poll_period_us": 10000, 00:25:40.504 "nvme_ioq_poll_period_us": 0, 00:25:40.504 "io_queue_requests": 0, 00:25:40.504 "delay_cmd_submit": true, 00:25:40.504 "transport_retry_count": 4, 00:25:40.504 "bdev_retry_count": 3, 00:25:40.504 "transport_ack_timeout": 0, 00:25:40.504 "ctrlr_loss_timeout_sec": 0, 00:25:40.504 "reconnect_delay_sec": 0, 00:25:40.504 "fast_io_fail_timeout_sec": 0, 00:25:40.504 "disable_auto_failback": false, 00:25:40.504 "generate_uuids": false, 00:25:40.504 "transport_tos": 0, 00:25:40.504 "nvme_error_stat": false, 00:25:40.504 "rdma_srq_size": 0, 00:25:40.504 "io_path_stat": false, 00:25:40.504 "allow_accel_sequence": false, 00:25:40.504 "rdma_max_cq_size": 0, 00:25:40.504 "rdma_cm_event_timeout_ms": 0, 00:25:40.504 "dhchap_digests": [ 00:25:40.504 "sha256", 00:25:40.504 "sha384", 00:25:40.504 "sha512" 00:25:40.504 ], 00:25:40.504 "dhchap_dhgroups": [ 00:25:40.504 "null", 00:25:40.504 "ffdhe2048", 00:25:40.504 "ffdhe3072", 00:25:40.504 "ffdhe4096", 00:25:40.504 "ffdhe6144", 00:25:40.504 "ffdhe8192" 00:25:40.504 ] 00:25:40.504 } 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "method": "bdev_nvme_set_hotplug", 00:25:40.504 "params": { 00:25:40.504 "period_us": 100000, 00:25:40.504 "enable": false 00:25:40.504 } 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "method": "bdev_wait_for_examine" 00:25:40.504 } 00:25:40.504 ] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "scsi", 00:25:40.504 "config": null 00:25:40.504 }, 00:25:40.504 { 
00:25:40.504 "subsystem": "scheduler", 00:25:40.504 "config": [ 00:25:40.504 { 00:25:40.504 "method": "framework_set_scheduler", 00:25:40.504 "params": { 00:25:40.504 "name": "static" 00:25:40.504 } 00:25:40.504 } 00:25:40.504 ] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "vhost_scsi", 00:25:40.504 "config": [] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "vhost_blk", 00:25:40.504 "config": [] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "ublk", 00:25:40.504 "config": [] 00:25:40.504 }, 00:25:40.504 { 00:25:40.504 "subsystem": "nbd", 00:25:40.504 "config": [] 00:25:40.505 }, 00:25:40.505 { 00:25:40.505 "subsystem": "nvmf", 00:25:40.505 "config": [ 00:25:40.505 { 00:25:40.505 "method": "nvmf_set_config", 00:25:40.505 "params": { 00:25:40.505 "discovery_filter": "match_any", 00:25:40.505 "admin_cmd_passthru": { 00:25:40.505 "identify_ctrlr": false 00:25:40.505 }, 00:25:40.505 "dhchap_digests": [ 00:25:40.505 "sha256", 00:25:40.505 "sha384", 00:25:40.505 "sha512" 00:25:40.505 ], 00:25:40.505 "dhchap_dhgroups": [ 00:25:40.505 "null", 00:25:40.505 "ffdhe2048", 00:25:40.505 "ffdhe3072", 00:25:40.505 "ffdhe4096", 00:25:40.505 "ffdhe6144", 00:25:40.505 "ffdhe8192" 00:25:40.505 ] 00:25:40.505 } 00:25:40.505 }, 00:25:40.505 { 00:25:40.505 "method": "nvmf_set_max_subsystems", 00:25:40.505 "params": { 00:25:40.505 "max_subsystems": 1024 00:25:40.505 } 00:25:40.505 }, 00:25:40.505 { 00:25:40.505 "method": "nvmf_set_crdt", 00:25:40.505 "params": { 00:25:40.505 "crdt1": 0, 00:25:40.505 "crdt2": 0, 00:25:40.505 "crdt3": 0 00:25:40.505 } 00:25:40.505 }, 00:25:40.505 { 00:25:40.505 "method": "nvmf_create_transport", 00:25:40.505 "params": { 00:25:40.505 "trtype": "TCP", 00:25:40.505 "max_queue_depth": 128, 00:25:40.505 "max_io_qpairs_per_ctrlr": 127, 00:25:40.505 "in_capsule_data_size": 4096, 00:25:40.505 "max_io_size": 131072, 00:25:40.505 "io_unit_size": 131072, 00:25:40.505 "max_aq_depth": 128, 00:25:40.505 "num_shared_buffers": 511, 
00:25:40.505 "buf_cache_size": 4294967295, 00:25:40.505 "dif_insert_or_strip": false, 00:25:40.505 "zcopy": false, 00:25:40.505 "c2h_success": true, 00:25:40.505 "sock_priority": 0, 00:25:40.505 "abort_timeout_sec": 1, 00:25:40.505 "ack_timeout": 0, 00:25:40.505 "data_wr_pool_size": 0 00:25:40.505 } 00:25:40.505 } 00:25:40.505 ] 00:25:40.505 }, 00:25:40.505 { 00:25:40.505 "subsystem": "iscsi", 00:25:40.505 "config": [ 00:25:40.505 { 00:25:40.505 "method": "iscsi_set_options", 00:25:40.505 "params": { 00:25:40.505 "node_base": "iqn.2016-06.io.spdk", 00:25:40.505 "max_sessions": 128, 00:25:40.505 "max_connections_per_session": 2, 00:25:40.505 "max_queue_depth": 64, 00:25:40.505 "default_time2wait": 2, 00:25:40.505 "default_time2retain": 20, 00:25:40.505 "first_burst_length": 8192, 00:25:40.505 "immediate_data": true, 00:25:40.505 "allow_duplicated_isid": false, 00:25:40.505 "error_recovery_level": 0, 00:25:40.505 "nop_timeout": 60, 00:25:40.505 "nop_in_interval": 30, 00:25:40.505 "disable_chap": false, 00:25:40.505 "require_chap": false, 00:25:40.505 "mutual_chap": false, 00:25:40.505 "chap_group": 0, 00:25:40.505 "max_large_datain_per_connection": 64, 00:25:40.505 "max_r2t_per_connection": 4, 00:25:40.505 "pdu_pool_size": 36864, 00:25:40.505 "immediate_data_pool_size": 16384, 00:25:40.505 "data_out_pool_size": 2048 00:25:40.505 } 00:25:40.505 } 00:25:40.505 ] 00:25:40.505 } 00:25:40.505 ] 00:25:40.505 } 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57413 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57413 ']' 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57413 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57413 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:40.505 killing process with pid 57413 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57413' 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57413 00:25:40.505 17:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57413 00:25:43.802 17:22:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57475 00:25:43.802 17:22:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:25:43.802 17:22:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57475 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57475 ']' 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57475 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57475 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:49.170 killing process with pid 57475 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57475' 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57475 00:25:49.170 17:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57475 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:25:51.070 00:25:51.070 real 0m12.427s 00:25:51.070 user 0m11.860s 00:25:51.070 sys 0m1.021s 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:51.070 ************************************ 00:25:51.070 END TEST skip_rpc_with_json 00:25:51.070 ************************************ 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:25:51.070 17:22:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:25:51.070 17:22:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:51.070 17:22:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:51.070 17:22:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:51.070 ************************************ 00:25:51.070 START TEST skip_rpc_with_delay 00:25:51.070 ************************************ 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:25:51.070 
17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:25:51.070 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:25:51.328 [2024-11-26 17:22:28.628574] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:25:51.328 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:25:51.328 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.328 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.328 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.328 00:25:51.328 real 0m0.234s 00:25:51.328 user 0m0.120s 00:25:51.328 sys 0m0.111s 00:25:51.328 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:51.328 17:22:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:25:51.328 ************************************ 00:25:51.328 END TEST skip_rpc_with_delay 00:25:51.328 ************************************ 00:25:51.328 17:22:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:25:51.328 17:22:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:25:51.328 17:22:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:25:51.328 17:22:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:51.328 17:22:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:51.328 17:22:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:51.584 ************************************ 00:25:51.584 START TEST exit_on_failed_rpc_init 00:25:51.584 ************************************ 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57614 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57614 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57614 ']' 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.584 17:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:25:51.584 [2024-11-26 17:22:28.925988] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:25:51.584 [2024-11-26 17:22:28.926179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57614 ] 00:25:51.896 [2024-11-26 17:22:29.132836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.896 [2024-11-26 17:22:29.336538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:25:53.281 17:22:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:25:53.281 [2024-11-26 17:22:30.572290] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:25:53.281 [2024-11-26 17:22:30.572467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57638 ] 00:25:53.540 [2024-11-26 17:22:30.776726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.540 [2024-11-26 17:22:30.959238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.540 [2024-11-26 17:22:30.959369] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:53.540 [2024-11-26 17:22:30.959401] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:53.540 [2024-11-26 17:22:30.959435] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57614 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57614 ']' 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57614 00:25:54.108 17:22:31 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57614 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.108 killing process with pid 57614 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57614' 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57614 00:25:54.108 17:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57614 00:25:56.678 00:25:56.678 real 0m5.144s 00:25:56.678 user 0m5.784s 00:25:56.678 sys 0m0.711s 00:25:56.678 17:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.678 17:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:25:56.678 ************************************ 00:25:56.678 END TEST exit_on_failed_rpc_init 00:25:56.678 ************************************ 00:25:56.678 17:22:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:25:56.678 00:25:56.678 real 0m26.230s 00:25:56.678 user 0m25.358s 00:25:56.678 sys 0m2.578s 00:25:56.678 17:22:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.678 17:22:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:56.678 ************************************ 00:25:56.678 END TEST skip_rpc 00:25:56.678 ************************************ 00:25:56.678 17:22:34 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:25:56.678 17:22:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:56.678 17:22:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.678 17:22:34 -- common/autotest_common.sh@10 -- # set +x 00:25:56.678 ************************************ 00:25:56.678 START TEST rpc_client 00:25:56.678 ************************************ 00:25:56.678 17:22:34 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:25:56.678 * Looking for test storage... 00:25:56.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@345 
-- # : 1 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.937 17:22:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:56.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.937 --rc genhtml_branch_coverage=1 00:25:56.937 --rc genhtml_function_coverage=1 00:25:56.937 --rc genhtml_legend=1 00:25:56.937 --rc geninfo_all_blocks=1 00:25:56.937 --rc geninfo_unexecuted_blocks=1 00:25:56.937 00:25:56.937 ' 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:56.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.937 --rc genhtml_branch_coverage=1 00:25:56.937 --rc genhtml_function_coverage=1 00:25:56.937 --rc 
genhtml_legend=1 00:25:56.937 --rc geninfo_all_blocks=1 00:25:56.937 --rc geninfo_unexecuted_blocks=1 00:25:56.937 00:25:56.937 ' 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:56.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.937 --rc genhtml_branch_coverage=1 00:25:56.937 --rc genhtml_function_coverage=1 00:25:56.937 --rc genhtml_legend=1 00:25:56.937 --rc geninfo_all_blocks=1 00:25:56.937 --rc geninfo_unexecuted_blocks=1 00:25:56.937 00:25:56.937 ' 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:56.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.937 --rc genhtml_branch_coverage=1 00:25:56.937 --rc genhtml_function_coverage=1 00:25:56.937 --rc genhtml_legend=1 00:25:56.937 --rc geninfo_all_blocks=1 00:25:56.937 --rc geninfo_unexecuted_blocks=1 00:25:56.937 00:25:56.937 ' 00:25:56.937 17:22:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:25:56.937 OK 00:25:56.937 17:22:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:25:56.937 00:25:56.937 real 0m0.268s 00:25:56.937 user 0m0.147s 00:25:56.937 sys 0m0.135s 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.937 17:22:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:25:56.937 ************************************ 00:25:56.937 END TEST rpc_client 00:25:56.937 ************************************ 00:25:56.937 17:22:34 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:25:56.937 17:22:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:56.937 17:22:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.937 17:22:34 -- common/autotest_common.sh@10 -- # set +x 00:25:56.937 ************************************ 00:25:56.937 START TEST json_config 
00:25:56.937 ************************************ 00:25:56.937 17:22:34 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:25:57.196 17:22:34 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:57.196 17:22:34 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:25:57.196 17:22:34 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:57.196 17:22:34 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:57.196 17:22:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.196 17:22:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.196 17:22:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.196 17:22:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.196 17:22:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.196 17:22:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.196 17:22:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.196 17:22:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.196 17:22:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.196 17:22:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.196 17:22:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.196 17:22:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:25:57.196 17:22:34 json_config -- scripts/common.sh@345 -- # : 1 00:25:57.196 17:22:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.196 17:22:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.196 17:22:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:25:57.196 17:22:34 json_config -- scripts/common.sh@353 -- # local d=1 00:25:57.196 17:22:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.196 17:22:34 json_config -- scripts/common.sh@355 -- # echo 1 00:25:57.196 17:22:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.196 17:22:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:25:57.196 17:22:34 json_config -- scripts/common.sh@353 -- # local d=2 00:25:57.196 17:22:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.196 17:22:34 json_config -- scripts/common.sh@355 -- # echo 2 00:25:57.196 17:22:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.196 17:22:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.196 17:22:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.196 17:22:34 json_config -- scripts/common.sh@368 -- # return 0 00:25:57.196 17:22:34 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.196 17:22:34 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:57.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.196 --rc genhtml_branch_coverage=1 00:25:57.196 --rc genhtml_function_coverage=1 00:25:57.196 --rc genhtml_legend=1 00:25:57.196 --rc geninfo_all_blocks=1 00:25:57.196 --rc geninfo_unexecuted_blocks=1 00:25:57.196 00:25:57.196 ' 00:25:57.196 17:22:34 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:57.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.196 --rc genhtml_branch_coverage=1 00:25:57.196 --rc genhtml_function_coverage=1 00:25:57.196 --rc genhtml_legend=1 00:25:57.196 --rc geninfo_all_blocks=1 00:25:57.196 --rc geninfo_unexecuted_blocks=1 00:25:57.196 00:25:57.196 ' 00:25:57.196 17:22:34 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:57.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.196 --rc genhtml_branch_coverage=1 00:25:57.196 --rc genhtml_function_coverage=1 00:25:57.196 --rc genhtml_legend=1 00:25:57.196 --rc geninfo_all_blocks=1 00:25:57.196 --rc geninfo_unexecuted_blocks=1 00:25:57.196 00:25:57.196 ' 00:25:57.196 17:22:34 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:57.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.196 --rc genhtml_branch_coverage=1 00:25:57.196 --rc genhtml_function_coverage=1 00:25:57.196 --rc genhtml_legend=1 00:25:57.196 --rc geninfo_all_blocks=1 00:25:57.196 --rc geninfo_unexecuted_blocks=1 00:25:57.196 00:25:57.196 ' 00:25:57.196 17:22:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:57.196 17:22:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:25:57.196 17:22:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.196 17:22:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.196 17:22:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d9aa832d-f5ae-44cc-9119-911c3264b49a 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=d9aa832d-f5ae-44cc-9119-911c3264b49a 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:57.197 17:22:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.197 17:22:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.197 17:22:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.197 17:22:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.197 17:22:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.197 17:22:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.197 17:22:34 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.197 17:22:34 json_config -- paths/export.sh@5 -- # export PATH 00:25:57.197 17:22:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@51 -- # : 0 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:57.197 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.197 17:22:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.197 17:22:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
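One real failure is embedded in this stretch of the trace: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` and bash reports `[: : integer expression expected`, because an unset or empty variable reaches a numeric `-eq` test. A hedged sketch of the defensive pattern that avoids this class of error (the function and variable names here are hypothetical; the log does not show which variable is empty at line 33):

```shell
#!/usr/bin/env bash
# The trace repeatedly logs:
#   nvmf/common.sh: line 33: [: : integer expression expected
# because an empty string is handed to an arithmetic test ('[' '' -eq 1 ']').
# Defaulting the value before the comparison sidesteps the error. The flag
# name below is an illustration, not the actual variable from nvmf/common.sh.

maybe_enable_feature() {
    local flag=$1
    # Treat empty/unset as 0 so the numeric comparison is always well-formed.
    if [ "${flag:-0}" -eq 1 ]; then
        echo enabled
    else
        echo disabled
    fi
}

maybe_enable_feature ""   # empty input no longer trips the integer-expression error
maybe_enable_feature 1
```

The error is non-fatal in the trace (the test continues), so this is cosmetic hardening rather than a functional fix.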
00:25:57.197 17:22:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:25:57.197 17:22:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:25:57.197 17:22:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:25:57.197 17:22:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:25:57.197 17:22:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:25:57.197 WARNING: No tests are enabled so not running JSON configuration tests 00:25:57.197 17:22:34 json_config -- json_config/json_config.sh@28 -- # exit 0 00:25:57.197 00:25:57.197 real 0m0.176s 00:25:57.197 user 0m0.100s 00:25:57.197 sys 0m0.079s 00:25:57.197 17:22:34 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.197 17:22:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:25:57.197 ************************************ 00:25:57.197 END TEST json_config 00:25:57.197 ************************************ 00:25:57.197 17:22:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:25:57.197 17:22:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:57.197 17:22:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.197 17:22:34 -- common/autotest_common.sh@10 -- # set +x 00:25:57.197 ************************************ 00:25:57.197 START TEST json_config_extra_key 00:25:57.197 ************************************ 00:25:57.197 17:22:34 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:25:57.457 17:22:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:57.457 17:22:34 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:25:57.457 17:22:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:57.457 17:22:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:25:57.457 17:22:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.457 17:22:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:57.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.457 --rc genhtml_branch_coverage=1 00:25:57.457 --rc genhtml_function_coverage=1 00:25:57.457 --rc genhtml_legend=1 00:25:57.457 --rc geninfo_all_blocks=1 00:25:57.457 --rc geninfo_unexecuted_blocks=1 00:25:57.457 00:25:57.457 ' 00:25:57.457 17:22:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:57.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.457 --rc genhtml_branch_coverage=1 00:25:57.457 --rc genhtml_function_coverage=1 00:25:57.457 --rc 
genhtml_legend=1 00:25:57.457 --rc geninfo_all_blocks=1 00:25:57.457 --rc geninfo_unexecuted_blocks=1 00:25:57.457 00:25:57.457 ' 00:25:57.457 17:22:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:57.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.457 --rc genhtml_branch_coverage=1 00:25:57.457 --rc genhtml_function_coverage=1 00:25:57.457 --rc genhtml_legend=1 00:25:57.457 --rc geninfo_all_blocks=1 00:25:57.457 --rc geninfo_unexecuted_blocks=1 00:25:57.457 00:25:57.457 ' 00:25:57.457 17:22:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:57.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.457 --rc genhtml_branch_coverage=1 00:25:57.457 --rc genhtml_function_coverage=1 00:25:57.457 --rc genhtml_legend=1 00:25:57.457 --rc geninfo_all_blocks=1 00:25:57.457 --rc geninfo_unexecuted_blocks=1 00:25:57.457 00:25:57.457 ' 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d9aa832d-f5ae-44cc-9119-911c3264b49a 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d9aa832d-f5ae-44cc-9119-911c3264b49a 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.457 17:22:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.457 17:22:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.457 17:22:34 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.457 17:22:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.457 17:22:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:25:57.457 17:22:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
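Each test in this log re-runs the same lcov version gate from scripts/common.sh (`lt 1.15 2` via `cmp_versions`, with `IFS=.-:` splitting and a per-component compare). A condensed, hedged re-creation of that idiom; the function names mirror the trace, but the real helper lives in spdk/scripts/common.sh and normalizes each field through its `decimal` function, which this sketch simplifies to a `:-0` default:

```shell
#!/usr/bin/env bash
# Simplified sketch of the cmp_versions/lt idiom traced from scripts/common.sh.

cmp_versions() {
    local ver1 ver1_l ver2 ver2_l
    local op=$2 v lt=0 gt=0
    # Split version strings on dots, dashes, and colons, as in the trace.
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # A missing field compares as 0 (so "2" vs "1.15" is 2.0 vs 1.15).
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && gt=1 && break
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && lt=1 && break
    done

    case "$op" in
        '<') (( lt == 1 )) ;;
        '>') (( gt == 1 )) ;;
    esac
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"
```

Splitting on `IFS=.-:` mirrors the trace; the real helper additionally validates each component with `decimal` before comparing.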
00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:57.457 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.457 17:22:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:25:57.457 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:25:57.458 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:25:57.458 INFO: launching applications... 
00:25:57.458 17:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57853 00:25:57.458 Waiting for target to run... 00:25:57.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
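The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." message above comes from autotest_common.sh's waitforlisten helper (max_retries=100 in the trace). A minimal sketch of that pattern, polling for the RPC socket while confirming the target has not died; the retry count is exposed as an optional third argument purely for this sketch:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern from common/autotest_common.sh: start a
# daemon, then poll until its UNIX-domain RPC socket appears. The default
# socket path and retry ceiling are taken from the log; the rest is assumed.

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
    local max_retries=${3:-100} i

    for (( i = 0; i < max_retries; i++ )); do
        # Bail out early if the target exited instead of coming up.
        kill -0 "$pid" 2>/dev/null || return 1
        # Success once the UNIX-domain socket exists.
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}

# Usage against the binary seen in the trace (paths taken from the log):
#   /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
#       -r /var/tmp/spdk_tgt.sock --json extra_key.json &
#   waitforlisten $! /var/tmp/spdk_tgt.sock || exit 1
```

The `kill -0` check is what lets the helper fail fast when spdk_tgt crashes during startup instead of burning all 100 retries.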
00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57853 /var/tmp/spdk_tgt.sock 00:25:57.458 17:22:34 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57853 ']' 00:25:57.458 17:22:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:25:57.458 17:22:34 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:25:57.458 17:22:34 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.458 17:22:34 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:25:57.458 17:22:34 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.458 17:22:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:25:57.766 [2024-11-26 17:22:34.939606] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:25:57.766 [2024-11-26 17:22:34.939780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57853 ] 00:25:58.024 [2024-11-26 17:22:35.381758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.282 [2024-11-26 17:22:35.543522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.214 00:25:59.214 INFO: shutting down applications... 
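The shutdown sequence that follows in the trace is driven by json_config/common.sh: send SIGINT to the app's pid, then poll with `kill -0` up to 30 times at 0.5 s intervals (common.sh@38-@45), clear `app_pid["$app"]` and announce "SPDK target shutdown done" once the process is gone. A minimal sketch; the signal is parameterized here so the sketch is testable, whereas the traced helper hardcodes SIGINT:

```shell
#!/usr/bin/env bash
# Sketch of json_config_test_shutdown_app as traced from json_config/common.sh:
# signal the target, then poll until the pid disappears.

json_config_test_shutdown_app() {
    local pid=$1 sig=${2:-SIGINT} i

    kill -s "$sig" "$pid" 2>/dev/null

    for (( i = 0; i < 30; i++ )); do
        # kill -0 delivers no signal; it only tests that the pid still exists.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    # Give up after ~15 s (30 polls x 0.5 s), matching the loop bound in the trace.
    return 1
}
```

The 0.5 s sleep between polls is visible directly in the log timestamps, where each `sleep 0.5` iteration advances the clock by roughly half a second.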
00:25:59.214 17:22:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.214 17:22:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:25:59.214 17:22:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:25:59.214 17:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:25:59.214 17:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:25:59.214 17:22:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:25:59.214 17:22:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:25:59.214 17:22:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57853 ]] 00:25:59.214 17:22:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57853 00:25:59.214 17:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:25:59.214 17:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:25:59.214 17:22:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57853 00:25:59.214 17:22:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:25:59.473 17:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:25:59.473 17:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:25:59.473 17:22:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57853 00:25:59.473 17:22:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:00.040 17:22:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:00.040 17:22:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:00.040 17:22:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57853 00:26:00.040 17:22:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:00.607 17:22:37 
json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:00.607 17:22:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:00.607 17:22:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57853 00:26:00.607 17:22:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:01.175 17:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:01.175 17:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:01.175 17:22:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57853 00:26:01.175 17:22:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:01.743 17:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:01.743 17:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:01.743 17:22:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57853 00:26:01.743 17:22:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:02.000 17:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:02.000 17:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:02.000 17:22:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57853 00:26:02.000 17:22:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:02.566 SPDK target shutdown done 00:26:02.566 17:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:02.566 17:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:02.566 17:22:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57853 00:26:02.566 17:22:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:26:02.566 17:22:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:26:02.566 17:22:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:26:02.566 17:22:39 
json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:26:02.566 Success 00:26:02.566 17:22:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:26:02.566 ************************************ 00:26:02.566 END TEST json_config_extra_key 00:26:02.566 ************************************ 00:26:02.566 00:26:02.566 real 0m5.306s 00:26:02.566 user 0m4.610s 00:26:02.566 sys 0m0.668s 00:26:02.566 17:22:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.566 17:22:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:02.566 17:22:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:02.566 17:22:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:02.566 17:22:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.566 17:22:39 -- common/autotest_common.sh@10 -- # set +x 00:26:02.566 ************************************ 00:26:02.566 START TEST alias_rpc 00:26:02.566 ************************************ 00:26:02.566 17:22:39 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:02.825 * Looking for test storage... 
00:26:02.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.825 17:22:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:02.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.825 --rc genhtml_branch_coverage=1 00:26:02.825 --rc genhtml_function_coverage=1 00:26:02.825 --rc genhtml_legend=1 00:26:02.825 --rc geninfo_all_blocks=1 00:26:02.825 --rc geninfo_unexecuted_blocks=1 00:26:02.825 00:26:02.825 ' 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:02.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.825 --rc genhtml_branch_coverage=1 00:26:02.825 --rc genhtml_function_coverage=1 00:26:02.825 --rc genhtml_legend=1 00:26:02.825 --rc geninfo_all_blocks=1 00:26:02.825 --rc geninfo_unexecuted_blocks=1 00:26:02.825 00:26:02.825 ' 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:26:02.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.825 --rc genhtml_branch_coverage=1 00:26:02.825 --rc genhtml_function_coverage=1 00:26:02.825 --rc genhtml_legend=1 00:26:02.825 --rc geninfo_all_blocks=1 00:26:02.825 --rc geninfo_unexecuted_blocks=1 00:26:02.825 00:26:02.825 ' 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:02.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.825 --rc genhtml_branch_coverage=1 00:26:02.825 --rc genhtml_function_coverage=1 00:26:02.825 --rc genhtml_legend=1 00:26:02.825 --rc geninfo_all_blocks=1 00:26:02.825 --rc geninfo_unexecuted_blocks=1 00:26:02.825 00:26:02.825 ' 00:26:02.825 17:22:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:26:02.825 17:22:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57971 00:26:02.825 17:22:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:02.825 17:22:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57971 00:26:02.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57971 ']' 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.825 17:22:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:03.084 [2024-11-26 17:22:40.304335] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:26:03.084 [2024-11-26 17:22:40.304518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57971 ] 00:26:03.084 [2024-11-26 17:22:40.508077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.342 [2024-11-26 17:22:40.683454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.274 17:22:41 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.274 17:22:41 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:26:04.275 17:22:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:26:04.533 17:22:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57971 00:26:04.533 17:22:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57971 ']' 00:26:04.533 17:22:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57971 00:26:04.533 17:22:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:26:04.533 17:22:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.533 17:22:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57971 00:26:04.791 killing process with pid 57971 00:26:04.791 17:22:42 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:04.791 17:22:42 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:04.791 17:22:42 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57971' 00:26:04.791 17:22:42 alias_rpc -- common/autotest_common.sh@973 -- # kill 57971 00:26:04.791 17:22:42 alias_rpc -- common/autotest_common.sh@978 -- # wait 57971 00:26:07.319 ************************************ 00:26:07.319 END TEST alias_rpc 00:26:07.319 ************************************ 00:26:07.319 00:26:07.319 real 
0m4.731s 00:26:07.319 user 0m4.860s 00:26:07.319 sys 0m0.657s 00:26:07.319 17:22:44 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:07.319 17:22:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:07.319 17:22:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:26:07.319 17:22:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:26:07.319 17:22:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:07.319 17:22:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:07.319 17:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:07.319 ************************************ 00:26:07.319 START TEST spdkcli_tcp 00:26:07.319 ************************************ 00:26:07.319 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:26:07.577 * Looking for test storage... 00:26:07.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.577 
17:22:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.577 17:22:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:07.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.577 --rc genhtml_branch_coverage=1 00:26:07.577 --rc genhtml_function_coverage=1 00:26:07.577 --rc genhtml_legend=1 
00:26:07.577 --rc geninfo_all_blocks=1 00:26:07.577 --rc geninfo_unexecuted_blocks=1 00:26:07.577 00:26:07.577 ' 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:07.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.577 --rc genhtml_branch_coverage=1 00:26:07.577 --rc genhtml_function_coverage=1 00:26:07.577 --rc genhtml_legend=1 00:26:07.577 --rc geninfo_all_blocks=1 00:26:07.577 --rc geninfo_unexecuted_blocks=1 00:26:07.577 00:26:07.577 ' 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:07.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.577 --rc genhtml_branch_coverage=1 00:26:07.577 --rc genhtml_function_coverage=1 00:26:07.577 --rc genhtml_legend=1 00:26:07.577 --rc geninfo_all_blocks=1 00:26:07.577 --rc geninfo_unexecuted_blocks=1 00:26:07.577 00:26:07.577 ' 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:07.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.577 --rc genhtml_branch_coverage=1 00:26:07.577 --rc genhtml_function_coverage=1 00:26:07.577 --rc genhtml_legend=1 00:26:07.577 --rc geninfo_all_blocks=1 00:26:07.577 --rc geninfo_unexecuted_blocks=1 00:26:07.577 00:26:07.577 ' 00:26:07.577 17:22:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:07.577 17:22:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:07.577 17:22:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:07.577 17:22:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:26:07.577 17:22:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:26:07.577 17:22:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:07.577 17:22:44 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.577 17:22:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:07.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.577 17:22:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58084 00:26:07.578 17:22:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58084 00:26:07.578 17:22:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58084 ']' 00:26:07.578 17:22:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:26:07.578 17:22:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.578 17:22:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.578 17:22:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.578 17:22:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.578 17:22:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:07.835 [2024-11-26 17:22:45.111802] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:26:07.835 [2024-11-26 17:22:45.112404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58084 ] 00:26:08.093 [2024-11-26 17:22:45.308905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:08.093 [2024-11-26 17:22:45.444713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.093 [2024-11-26 17:22:45.444734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.469 17:22:46 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.469 17:22:46 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:26:09.469 17:22:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58106 00:26:09.469 17:22:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:26:09.469 17:22:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:26:09.469 [ 00:26:09.469 "bdev_malloc_delete", 00:26:09.469 "bdev_malloc_create", 00:26:09.469 "bdev_null_resize", 00:26:09.469 "bdev_null_delete", 00:26:09.469 "bdev_null_create", 00:26:09.469 "bdev_nvme_cuse_unregister", 00:26:09.469 "bdev_nvme_cuse_register", 00:26:09.469 "bdev_opal_new_user", 00:26:09.469 "bdev_opal_set_lock_state", 00:26:09.469 "bdev_opal_delete", 00:26:09.469 "bdev_opal_get_info", 00:26:09.469 "bdev_opal_create", 00:26:09.469 "bdev_nvme_opal_revert", 00:26:09.469 "bdev_nvme_opal_init", 00:26:09.469 "bdev_nvme_send_cmd", 00:26:09.469 "bdev_nvme_set_keys", 00:26:09.469 "bdev_nvme_get_path_iostat", 00:26:09.469 "bdev_nvme_get_mdns_discovery_info", 00:26:09.469 "bdev_nvme_stop_mdns_discovery", 00:26:09.469 "bdev_nvme_start_mdns_discovery", 00:26:09.469 "bdev_nvme_set_multipath_policy", 00:26:09.469 
"bdev_nvme_set_preferred_path", 00:26:09.469 "bdev_nvme_get_io_paths", 00:26:09.469 "bdev_nvme_remove_error_injection", 00:26:09.469 "bdev_nvme_add_error_injection", 00:26:09.469 "bdev_nvme_get_discovery_info", 00:26:09.469 "bdev_nvme_stop_discovery", 00:26:09.469 "bdev_nvme_start_discovery", 00:26:09.469 "bdev_nvme_get_controller_health_info", 00:26:09.469 "bdev_nvme_disable_controller", 00:26:09.469 "bdev_nvme_enable_controller", 00:26:09.469 "bdev_nvme_reset_controller", 00:26:09.469 "bdev_nvme_get_transport_statistics", 00:26:09.469 "bdev_nvme_apply_firmware", 00:26:09.469 "bdev_nvme_detach_controller", 00:26:09.469 "bdev_nvme_get_controllers", 00:26:09.469 "bdev_nvme_attach_controller", 00:26:09.469 "bdev_nvme_set_hotplug", 00:26:09.469 "bdev_nvme_set_options", 00:26:09.469 "bdev_passthru_delete", 00:26:09.469 "bdev_passthru_create", 00:26:09.469 "bdev_lvol_set_parent_bdev", 00:26:09.469 "bdev_lvol_set_parent", 00:26:09.469 "bdev_lvol_check_shallow_copy", 00:26:09.469 "bdev_lvol_start_shallow_copy", 00:26:09.469 "bdev_lvol_grow_lvstore", 00:26:09.469 "bdev_lvol_get_lvols", 00:26:09.469 "bdev_lvol_get_lvstores", 00:26:09.469 "bdev_lvol_delete", 00:26:09.469 "bdev_lvol_set_read_only", 00:26:09.469 "bdev_lvol_resize", 00:26:09.469 "bdev_lvol_decouple_parent", 00:26:09.469 "bdev_lvol_inflate", 00:26:09.469 "bdev_lvol_rename", 00:26:09.469 "bdev_lvol_clone_bdev", 00:26:09.469 "bdev_lvol_clone", 00:26:09.469 "bdev_lvol_snapshot", 00:26:09.469 "bdev_lvol_create", 00:26:09.469 "bdev_lvol_delete_lvstore", 00:26:09.469 "bdev_lvol_rename_lvstore", 00:26:09.469 "bdev_lvol_create_lvstore", 00:26:09.469 "bdev_raid_set_options", 00:26:09.469 "bdev_raid_remove_base_bdev", 00:26:09.469 "bdev_raid_add_base_bdev", 00:26:09.469 "bdev_raid_delete", 00:26:09.469 "bdev_raid_create", 00:26:09.469 "bdev_raid_get_bdevs", 00:26:09.469 "bdev_error_inject_error", 00:26:09.469 "bdev_error_delete", 00:26:09.469 "bdev_error_create", 00:26:09.469 "bdev_split_delete", 00:26:09.469 
"bdev_split_create", 00:26:09.469 "bdev_delay_delete", 00:26:09.469 "bdev_delay_create", 00:26:09.469 "bdev_delay_update_latency", 00:26:09.469 "bdev_zone_block_delete", 00:26:09.469 "bdev_zone_block_create", 00:26:09.469 "blobfs_create", 00:26:09.469 "blobfs_detect", 00:26:09.469 "blobfs_set_cache_size", 00:26:09.469 "bdev_aio_delete", 00:26:09.469 "bdev_aio_rescan", 00:26:09.469 "bdev_aio_create", 00:26:09.469 "bdev_ftl_set_property", 00:26:09.469 "bdev_ftl_get_properties", 00:26:09.469 "bdev_ftl_get_stats", 00:26:09.469 "bdev_ftl_unmap", 00:26:09.469 "bdev_ftl_unload", 00:26:09.469 "bdev_ftl_delete", 00:26:09.469 "bdev_ftl_load", 00:26:09.469 "bdev_ftl_create", 00:26:09.469 "bdev_virtio_attach_controller", 00:26:09.469 "bdev_virtio_scsi_get_devices", 00:26:09.469 "bdev_virtio_detach_controller", 00:26:09.469 "bdev_virtio_blk_set_hotplug", 00:26:09.469 "bdev_iscsi_delete", 00:26:09.469 "bdev_iscsi_create", 00:26:09.469 "bdev_iscsi_set_options", 00:26:09.469 "accel_error_inject_error", 00:26:09.469 "ioat_scan_accel_module", 00:26:09.469 "dsa_scan_accel_module", 00:26:09.469 "iaa_scan_accel_module", 00:26:09.469 "keyring_file_remove_key", 00:26:09.469 "keyring_file_add_key", 00:26:09.469 "keyring_linux_set_options", 00:26:09.469 "fsdev_aio_delete", 00:26:09.469 "fsdev_aio_create", 00:26:09.469 "iscsi_get_histogram", 00:26:09.469 "iscsi_enable_histogram", 00:26:09.469 "iscsi_set_options", 00:26:09.469 "iscsi_get_auth_groups", 00:26:09.469 "iscsi_auth_group_remove_secret", 00:26:09.469 "iscsi_auth_group_add_secret", 00:26:09.469 "iscsi_delete_auth_group", 00:26:09.469 "iscsi_create_auth_group", 00:26:09.469 "iscsi_set_discovery_auth", 00:26:09.469 "iscsi_get_options", 00:26:09.469 "iscsi_target_node_request_logout", 00:26:09.469 "iscsi_target_node_set_redirect", 00:26:09.469 "iscsi_target_node_set_auth", 00:26:09.469 "iscsi_target_node_add_lun", 00:26:09.469 "iscsi_get_stats", 00:26:09.469 "iscsi_get_connections", 00:26:09.469 "iscsi_portal_group_set_auth", 
00:26:09.469 "iscsi_start_portal_group", 00:26:09.469 "iscsi_delete_portal_group", 00:26:09.469 "iscsi_create_portal_group", 00:26:09.469 "iscsi_get_portal_groups", 00:26:09.470 "iscsi_delete_target_node", 00:26:09.470 "iscsi_target_node_remove_pg_ig_maps", 00:26:09.470 "iscsi_target_node_add_pg_ig_maps", 00:26:09.470 "iscsi_create_target_node", 00:26:09.470 "iscsi_get_target_nodes", 00:26:09.470 "iscsi_delete_initiator_group", 00:26:09.470 "iscsi_initiator_group_remove_initiators", 00:26:09.470 "iscsi_initiator_group_add_initiators", 00:26:09.470 "iscsi_create_initiator_group", 00:26:09.470 "iscsi_get_initiator_groups", 00:26:09.470 "nvmf_set_crdt", 00:26:09.470 "nvmf_set_config", 00:26:09.470 "nvmf_set_max_subsystems", 00:26:09.470 "nvmf_stop_mdns_prr", 00:26:09.470 "nvmf_publish_mdns_prr", 00:26:09.470 "nvmf_subsystem_get_listeners", 00:26:09.470 "nvmf_subsystem_get_qpairs", 00:26:09.470 "nvmf_subsystem_get_controllers", 00:26:09.470 "nvmf_get_stats", 00:26:09.470 "nvmf_get_transports", 00:26:09.470 "nvmf_create_transport", 00:26:09.470 "nvmf_get_targets", 00:26:09.470 "nvmf_delete_target", 00:26:09.470 "nvmf_create_target", 00:26:09.470 "nvmf_subsystem_allow_any_host", 00:26:09.470 "nvmf_subsystem_set_keys", 00:26:09.470 "nvmf_subsystem_remove_host", 00:26:09.470 "nvmf_subsystem_add_host", 00:26:09.470 "nvmf_ns_remove_host", 00:26:09.470 "nvmf_ns_add_host", 00:26:09.470 "nvmf_subsystem_remove_ns", 00:26:09.470 "nvmf_subsystem_set_ns_ana_group", 00:26:09.470 "nvmf_subsystem_add_ns", 00:26:09.470 "nvmf_subsystem_listener_set_ana_state", 00:26:09.470 "nvmf_discovery_get_referrals", 00:26:09.470 "nvmf_discovery_remove_referral", 00:26:09.470 "nvmf_discovery_add_referral", 00:26:09.470 "nvmf_subsystem_remove_listener", 00:26:09.470 "nvmf_subsystem_add_listener", 00:26:09.470 "nvmf_delete_subsystem", 00:26:09.470 "nvmf_create_subsystem", 00:26:09.470 "nvmf_get_subsystems", 00:26:09.470 "env_dpdk_get_mem_stats", 00:26:09.470 "nbd_get_disks", 00:26:09.470 
"nbd_stop_disk", 00:26:09.470 "nbd_start_disk", 00:26:09.470 "ublk_recover_disk", 00:26:09.470 "ublk_get_disks", 00:26:09.470 "ublk_stop_disk", 00:26:09.470 "ublk_start_disk", 00:26:09.470 "ublk_destroy_target", 00:26:09.470 "ublk_create_target", 00:26:09.470 "virtio_blk_create_transport", 00:26:09.470 "virtio_blk_get_transports", 00:26:09.470 "vhost_controller_set_coalescing", 00:26:09.470 "vhost_get_controllers", 00:26:09.470 "vhost_delete_controller", 00:26:09.470 "vhost_create_blk_controller", 00:26:09.470 "vhost_scsi_controller_remove_target", 00:26:09.470 "vhost_scsi_controller_add_target", 00:26:09.470 "vhost_start_scsi_controller", 00:26:09.470 "vhost_create_scsi_controller", 00:26:09.470 "thread_set_cpumask", 00:26:09.470 "scheduler_set_options", 00:26:09.470 "framework_get_governor", 00:26:09.470 "framework_get_scheduler", 00:26:09.470 "framework_set_scheduler", 00:26:09.470 "framework_get_reactors", 00:26:09.470 "thread_get_io_channels", 00:26:09.470 "thread_get_pollers", 00:26:09.470 "thread_get_stats", 00:26:09.470 "framework_monitor_context_switch", 00:26:09.470 "spdk_kill_instance", 00:26:09.470 "log_enable_timestamps", 00:26:09.470 "log_get_flags", 00:26:09.470 "log_clear_flag", 00:26:09.470 "log_set_flag", 00:26:09.470 "log_get_level", 00:26:09.470 "log_set_level", 00:26:09.470 "log_get_print_level", 00:26:09.470 "log_set_print_level", 00:26:09.470 "framework_enable_cpumask_locks", 00:26:09.470 "framework_disable_cpumask_locks", 00:26:09.470 "framework_wait_init", 00:26:09.470 "framework_start_init", 00:26:09.470 "scsi_get_devices", 00:26:09.470 "bdev_get_histogram", 00:26:09.470 "bdev_enable_histogram", 00:26:09.470 "bdev_set_qos_limit", 00:26:09.470 "bdev_set_qd_sampling_period", 00:26:09.470 "bdev_get_bdevs", 00:26:09.470 "bdev_reset_iostat", 00:26:09.470 "bdev_get_iostat", 00:26:09.470 "bdev_examine", 00:26:09.470 "bdev_wait_for_examine", 00:26:09.470 "bdev_set_options", 00:26:09.470 "accel_get_stats", 00:26:09.470 "accel_set_options", 
00:26:09.470 "accel_set_driver", 00:26:09.470 "accel_crypto_key_destroy", 00:26:09.470 "accel_crypto_keys_get", 00:26:09.470 "accel_crypto_key_create", 00:26:09.470 "accel_assign_opc", 00:26:09.470 "accel_get_module_info", 00:26:09.470 "accel_get_opc_assignments", 00:26:09.470 "vmd_rescan", 00:26:09.470 "vmd_remove_device", 00:26:09.470 "vmd_enable", 00:26:09.470 "sock_get_default_impl", 00:26:09.470 "sock_set_default_impl", 00:26:09.470 "sock_impl_set_options", 00:26:09.470 "sock_impl_get_options", 00:26:09.470 "iobuf_get_stats", 00:26:09.470 "iobuf_set_options", 00:26:09.470 "keyring_get_keys", 00:26:09.470 "framework_get_pci_devices", 00:26:09.470 "framework_get_config", 00:26:09.470 "framework_get_subsystems", 00:26:09.470 "fsdev_set_opts", 00:26:09.470 "fsdev_get_opts", 00:26:09.470 "trace_get_info", 00:26:09.470 "trace_get_tpoint_group_mask", 00:26:09.470 "trace_disable_tpoint_group", 00:26:09.470 "trace_enable_tpoint_group", 00:26:09.470 "trace_clear_tpoint_mask", 00:26:09.470 "trace_set_tpoint_mask", 00:26:09.470 "notify_get_notifications", 00:26:09.470 "notify_get_types", 00:26:09.470 "spdk_get_version", 00:26:09.470 "rpc_get_methods" 00:26:09.470 ] 00:26:09.470 17:22:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:26:09.470 17:22:46 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:09.470 17:22:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:09.470 17:22:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:09.470 17:22:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58084 00:26:09.470 17:22:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58084 ']' 00:26:09.470 17:22:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58084 00:26:09.470 17:22:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:26:09.470 17:22:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.470 17:22:46 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58084 00:26:09.728 killing process with pid 58084 00:26:09.728 17:22:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:09.728 17:22:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:09.728 17:22:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58084' 00:26:09.728 17:22:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58084 00:26:09.728 17:22:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58084 00:26:12.257 ************************************ 00:26:12.257 END TEST spdkcli_tcp 00:26:12.257 ************************************ 00:26:12.257 00:26:12.257 real 0m4.881s 00:26:12.257 user 0m8.940s 00:26:12.257 sys 0m0.752s 00:26:12.257 17:22:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.257 17:22:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:12.257 17:22:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:12.257 17:22:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:12.257 17:22:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.257 17:22:49 -- common/autotest_common.sh@10 -- # set +x 00:26:12.257 ************************************ 00:26:12.257 START TEST dpdk_mem_utility 00:26:12.257 ************************************ 00:26:12.257 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:12.516 * Looking for test storage... 
00:26:12.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:26:12.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.516 17:22:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.516 --rc genhtml_branch_coverage=1 00:26:12.516 --rc genhtml_function_coverage=1 00:26:12.516 --rc genhtml_legend=1 00:26:12.516 --rc geninfo_all_blocks=1 00:26:12.516 --rc geninfo_unexecuted_blocks=1 00:26:12.516 00:26:12.516 ' 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.516 --rc genhtml_branch_coverage=1 00:26:12.516 --rc genhtml_function_coverage=1 
00:26:12.516 --rc genhtml_legend=1 00:26:12.516 --rc geninfo_all_blocks=1 00:26:12.516 --rc geninfo_unexecuted_blocks=1 00:26:12.516 00:26:12.516 ' 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.516 --rc genhtml_branch_coverage=1 00:26:12.516 --rc genhtml_function_coverage=1 00:26:12.516 --rc genhtml_legend=1 00:26:12.516 --rc geninfo_all_blocks=1 00:26:12.516 --rc geninfo_unexecuted_blocks=1 00:26:12.516 00:26:12.516 ' 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.516 --rc genhtml_branch_coverage=1 00:26:12.516 --rc genhtml_function_coverage=1 00:26:12.516 --rc genhtml_legend=1 00:26:12.516 --rc geninfo_all_blocks=1 00:26:12.516 --rc geninfo_unexecuted_blocks=1 00:26:12.516 00:26:12.516 ' 00:26:12.516 17:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:12.516 17:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58211 00:26:12.516 17:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58211 00:26:12.516 17:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58211 ']' 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.516 17:22:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:12.774 [2024-11-26 17:22:49.974530] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:26:12.774 [2024-11-26 17:22:49.974866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58211 ] 00:26:12.774 [2024-11-26 17:22:50.155199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.032 [2024-11-26 17:22:50.285491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.965 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.965 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:26:13.965 17:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:26:13.966 17:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:26:13.966 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.966 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:13.966 { 00:26:13.966 "filename": "/tmp/spdk_mem_dump.txt" 00:26:13.966 } 00:26:13.966 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.966 17:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:13.966 DPDK memory size 824.000000 MiB in 1 heap(s) 00:26:13.966 1 heaps totaling size 824.000000 MiB 00:26:13.966 size: 824.000000 MiB heap id: 0 00:26:13.966 end heaps---------- 00:26:13.966 9 mempools totaling size 603.782043 MiB 00:26:13.966 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:26:13.966 size: 158.602051 MiB name: PDU_data_out_Pool 00:26:13.966 size: 100.555481 MiB name: bdev_io_58211 00:26:13.966 size: 50.003479 MiB name: msgpool_58211 00:26:13.966 size: 36.509338 MiB name: fsdev_io_58211 00:26:13.966 size: 21.763794 MiB name: PDU_Pool 00:26:13.966 size: 19.513306 MiB name: SCSI_TASK_Pool 00:26:13.966 size: 4.133484 MiB name: evtpool_58211 00:26:13.966 size: 0.026123 MiB name: Session_Pool 00:26:13.966 end mempools------- 00:26:13.966 6 memzones totaling size 4.142822 MiB 00:26:13.966 size: 1.000366 MiB name: RG_ring_0_58211 00:26:13.966 size: 1.000366 MiB name: RG_ring_1_58211 00:26:13.966 size: 1.000366 MiB name: RG_ring_4_58211 00:26:13.966 size: 1.000366 MiB name: RG_ring_5_58211 00:26:13.966 size: 0.125366 MiB name: RG_ring_2_58211 00:26:13.966 size: 0.015991 MiB name: RG_ring_3_58211 00:26:13.966 end memzones------- 00:26:13.966 17:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:26:14.225 heap id: 0 total size: 824.000000 MiB number of busy elements: 312 number of free elements: 18 00:26:14.225 list of free elements. 
size: 16.782104 MiB 00:26:14.225 element at address: 0x200006400000 with size: 1.995972 MiB 00:26:14.225 element at address: 0x20000a600000 with size: 1.995972 MiB 00:26:14.225 element at address: 0x200003e00000 with size: 1.991028 MiB 00:26:14.225 element at address: 0x200019500040 with size: 0.999939 MiB 00:26:14.225 element at address: 0x200019900040 with size: 0.999939 MiB 00:26:14.225 element at address: 0x200019a00000 with size: 0.999084 MiB 00:26:14.225 element at address: 0x200032600000 with size: 0.994324 MiB 00:26:14.225 element at address: 0x200000400000 with size: 0.992004 MiB 00:26:14.225 element at address: 0x200019200000 with size: 0.959656 MiB 00:26:14.225 element at address: 0x200019d00040 with size: 0.936401 MiB 00:26:14.225 element at address: 0x200000200000 with size: 0.716980 MiB 00:26:14.225 element at address: 0x20001b400000 with size: 0.563660 MiB 00:26:14.225 element at address: 0x200000c00000 with size: 0.489197 MiB 00:26:14.225 element at address: 0x200019600000 with size: 0.487976 MiB 00:26:14.225 element at address: 0x200019e00000 with size: 0.485413 MiB 00:26:14.225 element at address: 0x200012c00000 with size: 0.433228 MiB 00:26:14.225 element at address: 0x200028800000 with size: 0.390442 MiB 00:26:14.225 element at address: 0x200000800000 with size: 0.350891 MiB 00:26:14.225 list of standard malloc elements. 
size: 199.286987 MiB 00:26:14.225 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:26:14.225 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:26:14.225 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:26:14.225 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:26:14.225 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:26:14.225 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:26:14.225 element at address: 0x200019deff40 with size: 0.062683 MiB 00:26:14.225 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:26:14.225 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:26:14.225 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:26:14.225 element at address: 0x200012bff040 with size: 0.000305 MiB 00:26:14.225 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:26:14.225 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:26:14.225 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:26:14.226 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:26:14.226 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200000cff000 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:26:14.226 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bff180 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bff280 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bff380 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bff480 with size: 0.000244 MiB 00:26:14.226 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bff680 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bff780 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bff880 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bff980 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:26:14.226 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:26:14.227 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:26:14.227 element at address: 0x200019affc40 with size: 0.000244 MiB 00:26:14.227 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4914c0 with size: 0.000244 
MiB 00:26:14.227 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4930c0 
with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:26:14.227 element at 
address: 0x20001b494cc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:26:14.227 element at address: 0x200028863f40 with size: 0.000244 MiB 00:26:14.227 element at address: 0x200028864040 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886af80 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b080 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b180 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b280 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b380 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b480 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b580 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b680 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b780 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b880 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886b980 with size: 0.000244 MiB 00:26:14.227 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886be80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886bf80 with size: 0.000244 MiB 
00:26:14.228 element at address: 0x20002886c080 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886c180 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886c280 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886c380 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886c480 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886c580 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886c680 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886c780 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886c880 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886c980 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d080 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d180 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d280 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d380 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d480 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d580 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d680 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d780 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d880 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886d980 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886da80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886db80 with 
size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886de80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886df80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e080 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e180 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e280 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e380 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e480 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e580 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e680 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e780 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e880 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886e980 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886f080 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886f180 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886f280 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886f380 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886f480 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886f580 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886f680 with size: 0.000244 MiB 00:26:14.228 element at address: 
0x20002886f780 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886f880 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886f980 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:26:14.228 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:26:14.228 list of memzone associated elements. size: 607.930908 MiB 00:26:14.228 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:26:14.228 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:26:14.228 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:26:14.228 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:26:14.228 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:26:14.228 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58211_0 00:26:14.228 element at address: 0x200000dff340 with size: 48.003113 MiB 00:26:14.228 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58211_0 00:26:14.228 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:26:14.228 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58211_0 00:26:14.228 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:26:14.228 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:26:14.228 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:26:14.228 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:26:14.228 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:26:14.228 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58211_0 00:26:14.228 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:26:14.228 associated memzone info: size: 
2.000366 MiB name: RG_MP_msgpool_58211 00:26:14.228 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:26:14.228 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58211 00:26:14.228 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:26:14.228 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:26:14.228 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:26:14.228 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:26:14.228 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:26:14.228 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:26:14.228 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:26:14.228 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:26:14.228 element at address: 0x200000cff100 with size: 1.000549 MiB 00:26:14.228 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58211 00:26:14.228 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:26:14.228 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58211 00:26:14.228 element at address: 0x200019affd40 with size: 1.000549 MiB 00:26:14.228 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58211 00:26:14.228 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:26:14.228 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58211 00:26:14.228 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:26:14.228 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58211 00:26:14.228 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:26:14.228 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58211 00:26:14.228 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:26:14.228 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:26:14.228 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:26:14.228 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:26:14.228 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:26:14.228 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:26:14.228 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:26:14.228 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58211 00:26:14.228 element at address: 0x20000085df80 with size: 0.125549 MiB 00:26:14.228 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58211 00:26:14.228 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:26:14.228 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:26:14.228 element at address: 0x200028864140 with size: 0.023804 MiB 00:26:14.228 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:26:14.228 element at address: 0x200000859d40 with size: 0.016174 MiB 00:26:14.228 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58211 00:26:14.228 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:26:14.228 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:26:14.228 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:26:14.228 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58211 00:26:14.228 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:26:14.228 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58211 00:26:14.228 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:26:14.228 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58211 00:26:14.228 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:26:14.228 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:26:14.228 17:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:26:14.228 17:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58211 00:26:14.228 17:22:51 dpdk_mem_utility -- 
common/autotest_common.sh@954 -- # '[' -z 58211 ']' 00:26:14.228 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58211 00:26:14.228 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:26:14.228 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.228 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58211 00:26:14.228 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.229 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.229 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58211' 00:26:14.229 killing process with pid 58211 00:26:14.229 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58211 00:26:14.229 17:22:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58211 00:26:17.515 00:26:17.515 real 0m4.584s 00:26:17.515 user 0m4.643s 00:26:17.515 sys 0m0.622s 00:26:17.515 ************************************ 00:26:17.515 END TEST dpdk_mem_utility 00:26:17.515 ************************************ 00:26:17.515 17:22:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:17.515 17:22:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:17.515 17:22:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:17.515 17:22:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:17.515 17:22:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.515 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:17.515 ************************************ 00:26:17.515 START TEST event 00:26:17.515 ************************************ 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:17.515 * Looking for test 
storage... 00:26:17.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1693 -- # lcov --version 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:17.515 17:22:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.515 17:22:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.515 17:22:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.515 17:22:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.515 17:22:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.515 17:22:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.515 17:22:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.515 17:22:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.515 17:22:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.515 17:22:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.515 17:22:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.515 17:22:54 event -- scripts/common.sh@344 -- # case "$op" in 00:26:17.515 17:22:54 event -- scripts/common.sh@345 -- # : 1 00:26:17.515 17:22:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.515 17:22:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:17.515 17:22:54 event -- scripts/common.sh@365 -- # decimal 1 00:26:17.515 17:22:54 event -- scripts/common.sh@353 -- # local d=1 00:26:17.515 17:22:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.515 17:22:54 event -- scripts/common.sh@355 -- # echo 1 00:26:17.515 17:22:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.515 17:22:54 event -- scripts/common.sh@366 -- # decimal 2 00:26:17.515 17:22:54 event -- scripts/common.sh@353 -- # local d=2 00:26:17.515 17:22:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.515 17:22:54 event -- scripts/common.sh@355 -- # echo 2 00:26:17.515 17:22:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.515 17:22:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.515 17:22:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.515 17:22:54 event -- scripts/common.sh@368 -- # return 0 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:17.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.515 --rc genhtml_branch_coverage=1 00:26:17.515 --rc genhtml_function_coverage=1 00:26:17.515 --rc genhtml_legend=1 00:26:17.515 --rc geninfo_all_blocks=1 00:26:17.515 --rc geninfo_unexecuted_blocks=1 00:26:17.515 00:26:17.515 ' 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:17.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.515 --rc genhtml_branch_coverage=1 00:26:17.515 --rc genhtml_function_coverage=1 00:26:17.515 --rc genhtml_legend=1 00:26:17.515 --rc geninfo_all_blocks=1 00:26:17.515 --rc geninfo_unexecuted_blocks=1 00:26:17.515 00:26:17.515 ' 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:17.515 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:26:17.515 --rc genhtml_branch_coverage=1 00:26:17.515 --rc genhtml_function_coverage=1 00:26:17.515 --rc genhtml_legend=1 00:26:17.515 --rc geninfo_all_blocks=1 00:26:17.515 --rc geninfo_unexecuted_blocks=1 00:26:17.515 00:26:17.515 ' 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:17.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.515 --rc genhtml_branch_coverage=1 00:26:17.515 --rc genhtml_function_coverage=1 00:26:17.515 --rc genhtml_legend=1 00:26:17.515 --rc geninfo_all_blocks=1 00:26:17.515 --rc geninfo_unexecuted_blocks=1 00:26:17.515 00:26:17.515 ' 00:26:17.515 17:22:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:17.515 17:22:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:26:17.515 17:22:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:26:17.515 17:22:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.515 17:22:54 event -- common/autotest_common.sh@10 -- # set +x 00:26:17.515 ************************************ 00:26:17.515 START TEST event_perf 00:26:17.515 ************************************ 00:26:17.515 17:22:54 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:17.515 Running I/O for 1 seconds...[2024-11-26 17:22:54.600722] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:26:17.515 [2024-11-26 17:22:54.601085] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58330 ] 00:26:17.515 [2024-11-26 17:22:54.808988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.774 [2024-11-26 17:22:54.990179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.774 [2024-11-26 17:22:54.990275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.774 [2024-11-26 17:22:54.990410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.774 Running I/O for 1 seconds...[2024-11-26 17:22:54.990431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.146 00:26:19.146 lcore 0: 174380 00:26:19.146 lcore 1: 174378 00:26:19.146 lcore 2: 174377 00:26:19.146 lcore 3: 174378 00:26:19.146 done. 
00:26:19.146 00:26:19.146 real 0m1.699s 00:26:19.146 user 0m4.438s 00:26:19.146 sys 0m0.128s 00:26:19.146 17:22:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.146 17:22:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:26:19.146 ************************************ 00:26:19.146 END TEST event_perf 00:26:19.146 ************************************ 00:26:19.146 17:22:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:26:19.146 17:22:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:19.146 17:22:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.146 17:22:56 event -- common/autotest_common.sh@10 -- # set +x 00:26:19.146 ************************************ 00:26:19.146 START TEST event_reactor 00:26:19.146 ************************************ 00:26:19.146 17:22:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:26:19.146 [2024-11-26 17:22:56.352427] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:26:19.146 [2024-11-26 17:22:56.352575] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58370 ] 00:26:19.146 [2024-11-26 17:22:56.523714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.404 [2024-11-26 17:22:56.659414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.778 test_start 00:26:20.778 oneshot 00:26:20.778 tick 100 00:26:20.778 tick 100 00:26:20.778 tick 250 00:26:20.778 tick 100 00:26:20.778 tick 100 00:26:20.778 tick 100 00:26:20.778 tick 250 00:26:20.778 tick 500 00:26:20.778 tick 100 00:26:20.778 tick 100 00:26:20.778 tick 250 00:26:20.778 tick 100 00:26:20.778 tick 100 00:26:20.778 test_end 00:26:20.778 00:26:20.778 real 0m1.604s 00:26:20.778 user 0m1.388s 00:26:20.778 sys 0m0.106s 00:26:20.778 ************************************ 00:26:20.778 END TEST event_reactor 00:26:20.778 ************************************ 00:26:20.778 17:22:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.778 17:22:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:26:20.778 17:22:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:26:20.778 17:22:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:20.778 17:22:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.778 17:22:57 event -- common/autotest_common.sh@10 -- # set +x 00:26:20.778 ************************************ 00:26:20.778 START TEST event_reactor_perf 00:26:20.778 ************************************ 00:26:20.778 17:22:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:26:20.778 [2024-11-26 
17:22:58.032091] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:26:20.778 [2024-11-26 17:22:58.032273] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58406 ] 00:26:21.036 [2024-11-26 17:22:58.235785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.036 [2024-11-26 17:22:58.403816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.416 test_start 00:26:22.416 test_end 00:26:22.416 Performance: 309131 events per second 00:26:22.417 00:26:22.417 real 0m1.693s 00:26:22.417 user 0m1.461s 00:26:22.417 sys 0m0.119s 00:26:22.417 ************************************ 00:26:22.417 END TEST event_reactor_perf 00:26:22.417 ************************************ 00:26:22.417 17:22:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.417 17:22:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:26:22.417 17:22:59 event -- event/event.sh@49 -- # uname -s 00:26:22.417 17:22:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:26:22.417 17:22:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:26:22.417 17:22:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:22.417 17:22:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.417 17:22:59 event -- common/autotest_common.sh@10 -- # set +x 00:26:22.417 ************************************ 00:26:22.417 START TEST event_scheduler 00:26:22.417 ************************************ 00:26:22.417 17:22:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:26:22.417 * Looking for test storage... 
00:26:22.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:26:22.417 17:22:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:22.417 17:22:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:22.417 17:22:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:26:22.674 17:22:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.674 17:22:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:22.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.675 --rc genhtml_branch_coverage=1 00:26:22.675 --rc genhtml_function_coverage=1 00:26:22.675 --rc genhtml_legend=1 00:26:22.675 --rc geninfo_all_blocks=1 00:26:22.675 --rc geninfo_unexecuted_blocks=1 00:26:22.675 00:26:22.675 ' 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:22.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.675 --rc genhtml_branch_coverage=1 00:26:22.675 --rc genhtml_function_coverage=1 00:26:22.675 --rc 
genhtml_legend=1 00:26:22.675 --rc geninfo_all_blocks=1 00:26:22.675 --rc geninfo_unexecuted_blocks=1 00:26:22.675 00:26:22.675 ' 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:22.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.675 --rc genhtml_branch_coverage=1 00:26:22.675 --rc genhtml_function_coverage=1 00:26:22.675 --rc genhtml_legend=1 00:26:22.675 --rc geninfo_all_blocks=1 00:26:22.675 --rc geninfo_unexecuted_blocks=1 00:26:22.675 00:26:22.675 ' 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:22.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.675 --rc genhtml_branch_coverage=1 00:26:22.675 --rc genhtml_function_coverage=1 00:26:22.675 --rc genhtml_legend=1 00:26:22.675 --rc geninfo_all_blocks=1 00:26:22.675 --rc geninfo_unexecuted_blocks=1 00:26:22.675 00:26:22.675 ' 00:26:22.675 17:22:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:26:22.675 17:22:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58484 00:26:22.675 17:22:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:26:22.675 17:22:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:26:22.675 17:22:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58484 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58484 ']' 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:26:22.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:22.675 17:22:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:22.675 [2024-11-26 17:23:00.068651] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:26:22.675 [2024-11-26 17:23:00.069176] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58484 ] 00:26:22.933 [2024-11-26 17:23:00.284150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:23.192 [2024-11-26 17:23:00.471104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.192 [2024-11-26 17:23:00.471159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.192 [2024-11-26 17:23:00.471240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.192 [2024-11-26 17:23:00.471243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.757 17:23:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.757 17:23:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:26:23.757 17:23:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:26:23.757 17:23:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.757 17:23:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:23.757 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:23.757 POWER: Cannot set governor of lcore 0 to userspace 00:26:23.757 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:23.757 POWER: Cannot set governor of lcore 0 to performance 00:26:23.757 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:23.757 POWER: Cannot set governor of lcore 0 to userspace 00:26:23.757 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:23.757 POWER: Cannot set governor of lcore 0 to userspace 00:26:23.757 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:26:23.757 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:26:23.757 POWER: Unable to set Power Management Environment for lcore 0 00:26:23.757 [2024-11-26 17:23:01.046549] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:26:23.758 [2024-11-26 17:23:01.046578] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:26:23.758 [2024-11-26 17:23:01.046593] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:26:23.758 [2024-11-26 17:23:01.046625] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:23.758 [2024-11-26 17:23:01.046638] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:26:23.758 [2024-11-26 17:23:01.046653] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:23.758 17:23:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.758 17:23:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:26:23.758 17:23:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.758 17:23:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:24.016 [2024-11-26 17:23:01.449857] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:26:24.016 17:23:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.016 17:23:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:26:24.016 17:23:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:24.016 17:23:01 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.016 17:23:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 ************************************ 00:26:24.275 START TEST scheduler_create_thread 00:26:24.275 ************************************ 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 2 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 3 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 4 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 5 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 6 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:26:24.275 7 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 8 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 9 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 10 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:26:24.275 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.276 17:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:25.210 17:23:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.210 17:23:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:26:25.210 17:23:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:26:25.210 17:23:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.210 17:23:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:26.586 ************************************ 00:26:26.586 END TEST scheduler_create_thread 00:26:26.586 ************************************ 00:26:26.586 17:23:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.586 00:26:26.586 real 0m2.141s 00:26:26.586 user 0m0.021s 00:26:26.586 sys 0m0.005s 00:26:26.586 17:23:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.586 17:23:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:26.586 17:23:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:26.586 17:23:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58484 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58484 ']' 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58484 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58484 00:26:26.586 killing process with pid 58484 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58484' 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58484 00:26:26.586 17:23:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58484 00:26:26.844 [2024-11-26 17:23:04.084919] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:26:28.258 00:26:28.258 real 0m5.698s 00:26:28.258 user 0m10.014s 00:26:28.258 sys 0m0.599s 00:26:28.258 17:23:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.258 ************************************ 00:26:28.258 END TEST event_scheduler 00:26:28.258 ************************************ 00:26:28.258 17:23:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 17:23:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:26:28.258 17:23:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:26:28.258 17:23:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:28.258 17:23:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.258 17:23:05 event -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 ************************************ 00:26:28.258 START TEST app_repeat 00:26:28.258 ************************************ 00:26:28.258 17:23:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58596 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:26:28.258 
17:23:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:26:28.258 Process app_repeat pid: 58596 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58596' 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:26:28.258 spdk_app_start Round 0 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:26:28.258 17:23:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58596 /var/tmp/spdk-nbd.sock 00:26:28.258 17:23:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58596 ']' 00:26:28.258 17:23:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:28.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:28.258 17:23:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.258 17:23:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:28.258 17:23:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.258 17:23:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 [2024-11-26 17:23:05.584065] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:26:28.258 [2024-11-26 17:23:05.584248] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58596 ] 00:26:28.517 [2024-11-26 17:23:05.785366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:28.517 [2024-11-26 17:23:05.939586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.517 [2024-11-26 17:23:05.939631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.449 17:23:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.449 17:23:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:26:29.449 17:23:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:29.707 Malloc0 00:26:29.707 17:23:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:29.965 Malloc1 00:26:29.965 17:23:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:29.965 17:23:07 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:26:29.965 17:23:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:26:30.226 /dev/nbd0
00:26:30.226 17:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:26:30.226 17:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:26:30.226 1+0 records in
00:26:30.226 1+0 records out
00:26:30.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253643 s, 16.1 MB/s
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:26:30.226 17:23:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:26:30.226 17:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:26:30.226 17:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:26:30.226 17:23:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:26:30.484 /dev/nbd1
00:26:30.484 17:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:26:30.484 17:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:26:30.484 1+0 records in
00:26:30.484 1+0 records out
00:26:30.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034517 s, 11.9 MB/s
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:26:30.484 17:23:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:26:30.484 17:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:26:30.484 17:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:26:30.484 17:23:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:26:30.484 17:23:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:30.484 17:23:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:26:30.741 17:23:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:26:30.741 {
00:26:30.741 "nbd_device": "/dev/nbd0",
00:26:30.741 "bdev_name": "Malloc0"
00:26:30.741 },
00:26:30.741 {
00:26:30.741 "nbd_device": "/dev/nbd1",
00:26:30.741 "bdev_name": "Malloc1"
00:26:30.741 }
00:26:30.741 ]'
00:26:30.741 17:23:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:26:30.741 {
00:26:30.742 "nbd_device": "/dev/nbd0",
00:26:30.742 "bdev_name": "Malloc0"
00:26:30.742 },
00:26:30.742 {
00:26:30.742 "nbd_device": "/dev/nbd1",
00:26:30.742 "bdev_name": "Malloc1"
00:26:30.742 }
00:26:30.742 ]'
00:26:30.742 17:23:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:26:31.046 /dev/nbd1'
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:26:31.046 /dev/nbd1'
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:26:31.046 256+0 records in
00:26:31.046 256+0 records out
00:26:31.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00659673 s, 159 MB/s
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:26:31.046 256+0 records in
00:26:31.046 256+0 records out
00:26:31.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273405 s, 38.4 MB/s
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:26:31.046 256+0 records in
00:26:31.046 256+0 records out
00:26:31.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0384293 s, 27.3 MB/s
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:26:31.046 17:23:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:26:31.304 17:23:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:31.562 17:23:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:26:32.128 17:23:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:26:32.128 17:23:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:26:32.693 17:23:09 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:26:34.066 [2024-11-26 17:23:11.347581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:34.066 [2024-11-26 17:23:11.478431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:34.066 [2024-11-26 17:23:11.478440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:34.323 [2024-11-26 17:23:11.711835] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:26:34.323 [2024-11-26 17:23:11.711946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:26:35.698 spdk_app_start Round 1
00:26:35.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:26:35.698 17:23:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:26:35.698 17:23:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:26:35.698 17:23:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58596 /var/tmp/spdk-nbd.sock
00:26:35.698 17:23:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58596 ']'
00:26:35.698 17:23:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:26:35.698 17:23:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:35.698 17:23:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:26:35.698 17:23:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:35.698 17:23:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:26:35.957 17:23:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:35.957 17:23:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:26:35.957 17:23:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:26:36.214 Malloc0
00:26:36.215 17:23:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:26:36.779 Malloc1
00:26:36.779 17:23:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:26:36.779 17:23:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:26:37.037 /dev/nbd0
00:26:37.038 17:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:26:37.038 17:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:26:37.038 1+0 records in
00:26:37.038 1+0 records out
00:26:37.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036366 s, 11.3 MB/s
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:26:37.038 17:23:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:26:37.038 17:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:26:37.038 17:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:26:37.038 17:23:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:26:37.296 /dev/nbd1
00:26:37.296 17:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:26:37.296 17:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:26:37.296 1+0 records in
00:26:37.296 1+0 records out
00:26:37.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269809 s, 15.2 MB/s
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:26:37.296 17:23:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:26:37.296 17:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:26:37.296 17:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:26:37.296 17:23:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:26:37.296 17:23:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:37.296 17:23:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:26:37.554 17:23:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:26:37.554 {
00:26:37.554 "nbd_device": "/dev/nbd0",
00:26:37.554 "bdev_name": "Malloc0"
00:26:37.554 },
00:26:37.554 {
00:26:37.554 "nbd_device": "/dev/nbd1",
00:26:37.554 "bdev_name": "Malloc1"
00:26:37.554 }
00:26:37.554 ]'
00:26:37.554 17:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:26:37.554 17:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:26:37.554 {
00:26:37.554 "nbd_device": "/dev/nbd0",
00:26:37.554 "bdev_name": "Malloc0"
00:26:37.554 },
00:26:37.554 {
00:26:37.554 "nbd_device": "/dev/nbd1",
00:26:37.554 "bdev_name": "Malloc1"
00:26:37.554 }
00:26:37.554 ]'
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:26:37.812 /dev/nbd1'
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:26:37.812 /dev/nbd1'
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:26:37.812 256+0 records in
00:26:37.812 256+0 records out
00:26:37.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650888 s, 161 MB/s
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:26:37.812 256+0 records in
00:26:37.812 256+0 records out
00:26:37.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337337 s, 31.1 MB/s
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:26:37.812 256+0 records in
00:26:37.812 256+0 records out
00:26:37.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327654 s, 32.0 MB/s
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:26:37.812 17:23:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:26:38.070 17:23:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:26:38.328 17:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:26:38.328 17:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:26:38.328 17:23:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:26:38.328 17:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:26:38.586 17:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:26:38.586 17:23:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:26:38.586 17:23:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:26:38.586 17:23:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:26:38.586 17:23:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:26:38.586 17:23:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:38.586 17:23:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:26:38.845 17:23:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:26:38.845 17:23:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:26:39.412 17:23:16 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:26:40.786 [2024-11-26 17:23:18.039454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:40.786 [2024-11-26 17:23:18.168873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:40.786 [2024-11-26 17:23:18.168890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:41.043 [2024-11-26 17:23:18.394781] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:26:41.043 [2024-11-26 17:23:18.394889] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:26:42.414 spdk_app_start Round 2
00:26:42.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:26:42.414 17:23:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:26:42.414 17:23:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:26:42.414 17:23:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58596 /var/tmp/spdk-nbd.sock
00:26:42.414 17:23:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58596 ']'
00:26:42.414 17:23:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:26:42.414 17:23:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:42.414 17:23:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:26:42.414 17:23:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:42.414 17:23:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:26:42.671 17:23:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:42.671 17:23:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:26:42.671 17:23:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:26:43.235 Malloc0
00:26:43.235 17:23:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:26:43.492 Malloc1
00:26:43.492 17:23:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:26:43.492 17:23:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:26:43.749 /dev/nbd0
00:26:43.749 17:23:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:26:43.749 17:23:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:26:43.749 17:23:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:26:43.749 17:23:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:26:43.749 17:23:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:26:43.749 17:23:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:26:43.749 17:23:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:26:43.749 17:23:20 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:26:43.749 17:23:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:26:43.749 17:23:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:26:43.749 17:23:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:26:43.749 1+0 records in
00:26:43.749 1+0 records out
00:26:43.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256895 s, 15.9 MB/s
00:26:43.749 17:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:43.749 17:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:26:43.749 17:23:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:43.749 17:23:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:26:43.749 17:23:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:26:43.749 17:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:26:43.749 17:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:26:43.749 17:23:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:26:44.006 /dev/nbd1
00:26:44.006 17:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:26:44.006 17:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:26:44.006 1+0 records in
00:26:44.006 1+0 records out
00:26:44.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312699 s, 13.1 MB/s
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:26:44.006 17:23:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:26:44.006 17:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:26:44.006 17:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:26:44.006 17:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:26:44.006 17:23:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:44.006 17:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:26:44.262 17:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:26:44.262 {
00:26:44.262 "nbd_device": "/dev/nbd0",
00:26:44.262 "bdev_name": "Malloc0"
00:26:44.262 },
00:26:44.262 {
00:26:44.262 "nbd_device": "/dev/nbd1",
00:26:44.262 "bdev_name": "Malloc1"
00:26:44.262 }
00:26:44.262 ]'
00:26:44.262 17:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:26:44.262 {
00:26:44.262 "nbd_device": "/dev/nbd0", 00:26:44.262 "bdev_name": "Malloc0" 00:26:44.262 }, 00:26:44.262 { 00:26:44.262 "nbd_device": "/dev/nbd1", 00:26:44.262 "bdev_name": "Malloc1" 00:26:44.262 } 00:26:44.262 ]' 00:26:44.262 17:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:44.519 17:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:44.519 /dev/nbd1' 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:44.520 /dev/nbd1' 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:26:44.520 256+0 records in 00:26:44.520 256+0 records out 00:26:44.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00863018 s, 122 MB/s 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:44.520 17:23:21 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:44.520 256+0 records in 00:26:44.520 256+0 records out 00:26:44.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279176 s, 37.6 MB/s 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:44.520 256+0 records in 00:26:44.520 256+0 records out 00:26:44.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325626 s, 32.2 MB/s 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
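The entries above exercise `nbd_dd_data_verify` from `bdev/nbd_common.sh` in its write and verify modes: fill a temp file with 1 MiB of random data, `dd` it onto each exported NBD device, then `cmp` every device back against the source file. A minimal runnable sketch of the same pattern, with plain temp files standing in for `/dev/nbd0`/`/dev/nbd1` (and `oflag=direct` dropped, since regular files may reject O_DIRECT; the real test uses it to bypass the page cache so the verify actually reads the device):

```shell
#!/usr/bin/env bash
# Sketch of the write-then-verify flow from bdev/nbd_common.sh. Temp files
# stand in for the real /dev/nbd* devices so the sketch runs anywhere.
set -u

tmp_file=$(mktemp)                   # plays the role of .../event/nbdrandtest
nbd_list=("$(mktemp)" "$(mktemp)")   # stand-ins for /dev/nbd0 and /dev/nbd1

# write: 256 x 4096-byte blocks of random data, copied onto every "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# verify: byte-compare the first 1 MiB of each "device" with the source file
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
```

The `1M` byte limit and `-b` (print differing bytes) match the `cmp` invocation seen in the trace.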
00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:44.520 17:23:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:44.778 17:23:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:45.035 17:23:22 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:45.035 17:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:45.292 17:23:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:26:45.292 17:23:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:26:45.856 17:23:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:26:47.227 
[2024-11-26 17:23:24.588177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:47.483 [2024-11-26 17:23:24.715164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.483 [2024-11-26 17:23:24.715164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.742 [2024-11-26 17:23:24.942860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:26:47.742 [2024-11-26 17:23:24.942947] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:26:49.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:49.115 17:23:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58596 /var/tmp/spdk-nbd.sock 00:26:49.115 17:23:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58596 ']' 00:26:49.115 17:23:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:49.115 17:23:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.115 17:23:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:26:49.115 17:23:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.115 17:23:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:26:49.372 17:23:26 event.app_repeat -- event/event.sh@39 -- # killprocess 58596 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58596 ']' 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58596 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58596 00:26:49.372 killing process with pid 58596 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58596' 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58596 00:26:49.372 17:23:26 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58596 00:26:50.742 spdk_app_start is called in Round 0. 00:26:50.742 Shutdown signal received, stop current app iteration 00:26:50.742 Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 reinitialization... 00:26:50.742 spdk_app_start is called in Round 1. 00:26:50.742 Shutdown signal received, stop current app iteration 00:26:50.742 Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 reinitialization... 00:26:50.742 spdk_app_start is called in Round 2. 
00:26:50.742 Shutdown signal received, stop current app iteration 00:26:50.742 Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 reinitialization... 00:26:50.742 spdk_app_start is called in Round 3. 00:26:50.742 Shutdown signal received, stop current app iteration 00:26:50.742 17:23:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:26:50.742 17:23:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:26:50.742 00:26:50.742 real 0m22.293s 00:26:50.742 user 0m48.663s 00:26:50.742 sys 0m3.710s 00:26:50.742 17:23:27 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:50.742 ************************************ 00:26:50.742 END TEST app_repeat 00:26:50.742 ************************************ 00:26:50.742 17:23:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:50.742 17:23:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:26:50.742 17:23:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:26:50.742 17:23:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:50.742 17:23:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:50.742 17:23:27 event -- common/autotest_common.sh@10 -- # set +x 00:26:50.742 ************************************ 00:26:50.742 START TEST cpu_locks 00:26:50.742 ************************************ 00:26:50.742 17:23:27 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:26:50.742 * Looking for test storage... 
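The `spdk_app_start is called in Round N` / `Shutdown signal received` pairs above come from the app_repeat harness, which restarts the target app four times, ending each round with `spdk_kill_instance SIGTERM` over the RPC socket. A hedged sketch of that round loop, with a plain `sleep` process and a direct `kill` standing in for the SPDK target and the RPC call:

```shell
#!/usr/bin/env bash
# Sketch of the app_repeat round structure: start the app, signal it to shut
# down, wait for it to exit, repeat. `sleep` and `kill -TERM` are stand-ins
# for the real SPDK target and the spdk_kill_instance RPC.
for round in 0 1 2 3; do
    sleep 30 &
    app_pid=$!
    echo "spdk_app_start is called in Round $round."
    kill -TERM "$app_pid"   # the harness does this via rpc.py
    wait "$app_pid"         # exit status 143 (128+SIGTERM) is expected here
    echo "Shutdown signal received, stop current app iteration"
done
```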
00:26:50.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:26:50.742 17:23:27 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:50.742 17:23:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:26:50.742 17:23:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:50.742 17:23:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:50.742 17:23:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.742 17:23:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.743 17:23:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:26:50.743 17:23:28 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.743 17:23:28 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:50.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.743 --rc genhtml_branch_coverage=1 00:26:50.743 --rc genhtml_function_coverage=1 00:26:50.743 --rc genhtml_legend=1 00:26:50.743 --rc geninfo_all_blocks=1 00:26:50.743 --rc geninfo_unexecuted_blocks=1 00:26:50.743 00:26:50.743 ' 00:26:50.743 17:23:28 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:50.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.743 --rc genhtml_branch_coverage=1 00:26:50.743 --rc genhtml_function_coverage=1 00:26:50.743 --rc genhtml_legend=1 00:26:50.743 --rc geninfo_all_blocks=1 00:26:50.743 --rc geninfo_unexecuted_blocks=1 
00:26:50.743 00:26:50.743 ' 00:26:50.743 17:23:28 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:50.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.743 --rc genhtml_branch_coverage=1 00:26:50.743 --rc genhtml_function_coverage=1 00:26:50.743 --rc genhtml_legend=1 00:26:50.743 --rc geninfo_all_blocks=1 00:26:50.743 --rc geninfo_unexecuted_blocks=1 00:26:50.743 00:26:50.743 ' 00:26:50.743 17:23:28 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:50.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.743 --rc genhtml_branch_coverage=1 00:26:50.743 --rc genhtml_function_coverage=1 00:26:50.743 --rc genhtml_legend=1 00:26:50.743 --rc geninfo_all_blocks=1 00:26:50.743 --rc geninfo_unexecuted_blocks=1 00:26:50.743 00:26:50.743 ' 00:26:50.743 17:23:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:26:50.743 17:23:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:26:50.743 17:23:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:26:50.743 17:23:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:26:50.743 17:23:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:50.743 17:23:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:50.743 17:23:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:26:50.743 ************************************ 00:26:50.743 START TEST default_locks 00:26:50.743 ************************************ 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59081 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59081 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 59081 ']' 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:50.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.743 17:23:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:26:51.000 [2024-11-26 17:23:28.249139] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
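The `scripts/common.sh` trace a little earlier (`lt 1.15 2`, `cmp_versions`, `IFS=.-:`, `read -ra ver1`) implements a dotted-version comparison: both versions are split on `.`, `-`, and `:` into arrays and compared field by field, with missing fields treated as 0. A simplified, self-contained sketch of that idea (same `lt` name as the trace; the real helper also handles non-numeric fields):

```shell
# Simplified version of the "lt" check traced from scripts/common.sh: succeed
# when version $1 is strictly lower than version $2. Fields are split on ".",
# "-" and ":" and compared numerically; missing fields count as 0.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < n; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1    # equal versions: not strictly less-than
}

lt 1.15 2 && echo "1.15 is older than 2"
```

Splitting into arrays is what makes `1.9 < 1.10` come out right; a plain string comparison would get it backwards.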
00:26:51.000 [2024-11-26 17:23:28.249700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59081 ] 00:26:51.257 [2024-11-26 17:23:28.469233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.257 [2024-11-26 17:23:28.608487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.187 17:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:52.187 17:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:26:52.187 17:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59081 00:26:52.187 17:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59081 00:26:52.188 17:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:26:52.752 17:23:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59081 00:26:52.752 17:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59081 ']' 00:26:52.752 17:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59081 00:26:52.752 17:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:26:52.752 17:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.752 17:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59081 00:26:53.056 killing process with pid 59081 00:26:53.056 17:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:53.056 17:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:53.056 17:23:30 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59081' 00:26:53.056 17:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59081 00:26:53.056 17:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59081 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59081 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59081 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59081 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59081 ']' 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
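The `kill -0` / `ps --no-headers -o comm=` / `killing process with pid …` sequence above is autotest_common.sh's `killprocess` helper: confirm the pid is alive, inspect its command name (which resolves to `reactor_0` here, the SPDK reactor thread), refuse to touch anything named `sudo`, then SIGTERM it and wait for it to exit. A hedged sketch of that flow, simplified from the real helper:

```shell
#!/usr/bin/env bash
# Simplified killprocess in the spirit of autotest_common.sh: check liveness,
# check the command name, never signal a sudo wrapper, then terminate and reap.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1        # already gone?
    process_name=$(ps --no-headers -o comm= -p "$pid")
    [ "$process_name" = sudo ] && return 1        # too dangerous to kill
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap our own child
    return 0
}

sleep 30 &
killprocess $!
```

`wait` only reaps children of the current shell, which is why the real helper falls back to a polling loop when the pid was started elsewhere.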
00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:26:55.583 ERROR: process (pid: 59081) is no longer running 00:26:55.583 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59081) - No such process 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:26:55.583 00:26:55.583 real 0m4.887s 00:26:55.583 user 0m4.807s 00:26:55.583 sys 0m0.843s 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.583 ************************************ 00:26:55.583 END TEST default_locks 00:26:55.583 ************************************ 00:26:55.583 17:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:26:55.583 17:23:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:26:55.583 17:23:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:26:55.583 17:23:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.583 17:23:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:26:55.583 ************************************ 00:26:55.583 START TEST default_locks_via_rpc 00:26:55.583 ************************************ 00:26:55.583 17:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:26:55.583 17:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59162 00:26:55.583 17:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59162 00:26:55.583 17:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59162 ']' 00:26:55.841 17:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:55.841 17:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.841 17:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.841 17:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.841 17:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.842 17:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:55.842 [2024-11-26 17:23:33.137811] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:26:55.842 [2024-11-26 17:23:33.137972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59162 ] 00:26:56.100 [2024-11-26 17:23:33.318466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.100 [2024-11-26 17:23:33.490114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:57.135 17:23:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59162 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59162 00:26:57.135 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59162 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59162 ']' 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59162 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59162 00:26:57.703 killing process with pid 59162 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59162' 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59162 00:26:57.703 17:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59162 00:27:00.230 ************************************ 00:27:00.230 END TEST default_locks_via_rpc 00:27:00.230 ************************************ 00:27:00.230 00:27:00.230 real 0m4.568s 00:27:00.230 user 0m4.677s 00:27:00.230 sys 0m0.668s 00:27:00.230 
17:23:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.230 17:23:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:00.230 17:23:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:27:00.230 17:23:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:00.230 17:23:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.230 17:23:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:00.230 ************************************ 00:27:00.230 START TEST non_locking_app_on_locked_coremask 00:27:00.230 ************************************ 00:27:00.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59241 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59241 /var/tmp/spdk.sock 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59241 ']' 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.230 17:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:00.490 [2024-11-26 17:23:37.756231] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:00.490 [2024-11-26 17:23:37.756586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ] 00:27:00.490 [2024-11-26 17:23:37.928094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.834 [2024-11-26 17:23:38.050385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59257 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59257 /var/tmp/spdk2.sock 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59257 ']' 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:01.771 17:23:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:01.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.771 17:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:01.771 [2024-11-26 17:23:39.078926] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:01.771 [2024-11-26 17:23:39.079090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59257 ] 00:27:02.029 [2024-11-26 17:23:39.275561] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:27:02.029 [2024-11-26 17:23:39.275638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.287 [2024-11-26 17:23:39.532018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.926 17:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.926 17:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:04.926 17:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59241 00:27:04.926 17:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59241 00:27:04.926 17:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59241 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59241 ']' 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59241 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59241 00:27:05.493 killing process with pid 59241 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59241' 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59241 00:27:05.493 17:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59241 00:27:10.760 17:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59257 00:27:10.760 17:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59257 ']' 00:27:10.760 17:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59257 00:27:10.760 17:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:27:10.760 17:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.760 17:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59257 00:27:10.760 killing process with pid 59257 00:27:10.760 17:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:10.760 17:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:10.760 17:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59257' 00:27:10.760 17:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59257 00:27:10.760 17:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59257 00:27:14.046 ************************************ 00:27:14.046 END TEST non_locking_app_on_locked_coremask 00:27:14.046 ************************************ 00:27:14.046 00:27:14.046 real 0m13.132s 
00:27:14.046 user 0m13.546s 00:27:14.046 sys 0m1.498s 00:27:14.046 17:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.046 17:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:14.046 17:23:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:27:14.046 17:23:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:14.046 17:23:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.046 17:23:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:14.046 ************************************ 00:27:14.046 START TEST locking_app_on_unlocked_coremask 00:27:14.046 ************************************ 00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59422 00:27:14.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59422 /var/tmp/spdk.sock 00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59422 ']' 00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.046 17:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:14.046 [2024-11-26 17:23:50.994489] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:14.046 [2024-11-26 17:23:50.994662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59422 ] 00:27:14.046 [2024-11-26 17:23:51.196987] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:27:14.046 [2024-11-26 17:23:51.197067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.046 [2024-11-26 17:23:51.352323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59443 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59443 /var/tmp/spdk2.sock 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59443 ']' 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.980 17:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:15.238 [2024-11-26 17:23:52.459173] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:27:15.238 [2024-11-26 17:23:52.459593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59443 ] 00:27:15.238 [2024-11-26 17:23:52.654755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.495 [2024-11-26 17:23:52.918096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.070 17:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.070 17:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:18.070 17:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59443 00:27:18.070 17:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59443 00:27:18.070 17:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59422 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59422 ']' 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59422 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59422 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59422' 00:27:19.004 killing process with pid 59422 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59422 00:27:19.004 17:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59422 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59443 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59443 ']' 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59443 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59443 00:27:25.604 killing process with pid 59443 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59443' 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59443 00:27:25.604 17:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59443 00:27:27.506 00:27:27.506 real 0m13.923s 00:27:27.506 user 0m14.445s 00:27:27.506 sys 0m1.692s 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:27.506 ************************************ 00:27:27.506 END TEST locking_app_on_unlocked_coremask 00:27:27.506 ************************************ 00:27:27.506 17:24:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:27:27.506 17:24:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:27.506 17:24:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.506 17:24:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:27.506 ************************************ 00:27:27.506 START TEST locking_app_on_locked_coremask 00:27:27.506 ************************************ 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59614 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59614 /var/tmp/spdk.sock 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59614 ']' 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.506 17:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:27.506 [2024-11-26 17:24:04.940632] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:27.506 [2024-11-26 17:24:04.940830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:27:27.764 [2024-11-26 17:24:05.132743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.022 [2024-11-26 17:24:05.301550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59635 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59635 /var/tmp/spdk2.sock 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59635 /var/tmp/spdk2.sock 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59635 /var/tmp/spdk2.sock 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59635 ']' 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:29.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.413 17:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:29.413 [2024-11-26 17:24:06.688971] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:27:29.413 [2024-11-26 17:24:06.689654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59635 ] 00:27:29.671 [2024-11-26 17:24:06.911034] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59614 has claimed it. 00:27:29.671 [2024-11-26 17:24:06.911154] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:27:29.931 ERROR: process (pid: 59635) is no longer running 00:27:29.931 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59635) - No such process 00:27:29.931 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.931 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:27:29.931 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:27:29.931 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.931 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.931 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.931 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59614 00:27:29.931 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:29.931 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59614 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59614 00:27:30.497 17:24:07 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59614 ']' 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59614 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59614 00:27:30.497 killing process with pid 59614 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59614' 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59614 00:27:30.497 17:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59614 00:27:33.776 ************************************ 00:27:33.777 END TEST locking_app_on_locked_coremask 00:27:33.777 ************************************ 00:27:33.777 00:27:33.777 real 0m5.977s 00:27:33.777 user 0m6.661s 00:27:33.777 sys 0m1.045s 00:27:33.777 17:24:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.777 17:24:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:33.777 17:24:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:27:33.777 17:24:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:27:33.777 17:24:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:33.777 17:24:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:33.777 ************************************ 00:27:33.777 START TEST locking_overlapped_coremask 00:27:33.777 ************************************ 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59710 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59710 /var/tmp/spdk.sock 00:27:33.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59710 ']' 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.777 17:24:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:33.777 [2024-11-26 17:24:11.010787] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:27:33.777 [2024-11-26 17:24:11.012002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59710 ] 00:27:33.777 [2024-11-26 17:24:11.217577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.035 [2024-11-26 17:24:11.404655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.035 [2024-11-26 17:24:11.404707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.035 [2024-11-26 17:24:11.404711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59739 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59739 /var/tmp/spdk2.sock 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59739 /var/tmp/spdk2.sock 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59739 /var/tmp/spdk2.sock 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59739 ']' 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:35.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:35.408 17:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:35.408 [2024-11-26 17:24:12.629320] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:35.408 [2024-11-26 17:24:12.629832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] 00:27:35.408 [2024-11-26 17:24:12.841000] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59710 has claimed it. 00:27:35.408 [2024-11-26 17:24:12.841249] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:27:35.984 ERROR: process (pid: 59739) is no longer running 00:27:35.984 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59739) - No such process 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59710 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59710 ']' 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59710 00:27:35.984 17:24:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59710 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:35.984 killing process with pid 59710 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59710' 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59710 00:27:35.984 17:24:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59710 00:27:39.289 00:27:39.289 real 0m5.330s 00:27:39.289 user 0m14.294s 00:27:39.289 sys 0m0.712s 00:27:39.289 17:24:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:39.289 17:24:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:39.289 ************************************ 00:27:39.289 END TEST locking_overlapped_coremask 00:27:39.289 ************************************ 00:27:39.289 17:24:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:27:39.289 17:24:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:39.289 17:24:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.289 17:24:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:39.289 ************************************ 00:27:39.289 START TEST 
locking_overlapped_coremask_via_rpc 00:27:39.289 ************************************ 00:27:39.289 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:27:39.290 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:27:39.290 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59809 00:27:39.290 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59809 /var/tmp/spdk.sock 00:27:39.290 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59809 ']' 00:27:39.290 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.290 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.290 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.290 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.290 17:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:39.290 [2024-11-26 17:24:16.406670] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:27:39.290 [2024-11-26 17:24:16.406856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59809 ] 00:27:39.290 [2024-11-26 17:24:16.599340] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:27:39.290 [2024-11-26 17:24:16.599616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:39.548 [2024-11-26 17:24:16.741394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.548 [2024-11-26 17:24:16.741536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.548 [2024-11-26 17:24:16.741572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:40.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59832 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59832 /var/tmp/spdk2.sock 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59832 ']' 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:40.483 17:24:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:27:40.483 [2024-11-26 17:24:17.877270] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:40.483 [2024-11-26 17:24:17.877438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:27:40.741 [2024-11-26 17:24:18.090551] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:27:40.741 [2024-11-26 17:24:18.090787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:41.000 [2024-11-26 17:24:18.373331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:41.000 [2024-11-26 17:24:18.373435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:41.000 [2024-11-26 17:24:18.373404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.534 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.534 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:43.534 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:27:43.534 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.534 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:43.534 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.534 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:43.535 17:24:20 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:43.535 [2024-11-26 17:24:20.716292] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59809 has claimed it. 00:27:43.535 request: 00:27:43.535 { 00:27:43.535 "method": "framework_enable_cpumask_locks", 00:27:43.535 "req_id": 1 00:27:43.535 } 00:27:43.535 Got JSON-RPC error response 00:27:43.535 response: 00:27:43.535 { 00:27:43.535 "code": -32603, 00:27:43.535 "message": "Failed to claim CPU core: 2" 00:27:43.535 } 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59809 /var/tmp/spdk.sock 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59809 ']' 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59832 /var/tmp/spdk2.sock 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59832 ']' 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:43.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.535 17:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:44.109 17:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.109 17:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:44.109 ************************************ 00:27:44.109 END TEST locking_overlapped_coremask_via_rpc 00:27:44.109 ************************************ 00:27:44.109 17:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:27:44.109 17:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:27:44.109 17:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:27:44.109 17:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:27:44.109 00:27:44.109 real 0m5.085s 00:27:44.109 user 0m1.802s 00:27:44.109 sys 0m0.299s 00:27:44.109 17:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:44.109 17:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:44.109 17:24:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:27:44.109 17:24:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59809 ]] 00:27:44.109 17:24:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59809 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59809 ']' 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59809 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59809 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:44.109 killing process with pid 59809 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59809' 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59809 00:27:44.109 17:24:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59809 00:27:47.387 17:24:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59832 ]] 00:27:47.387 17:24:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59832 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59832 ']' 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59832 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59832 00:27:47.387 killing process with pid 59832 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59832' 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59832 00:27:47.387 17:24:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59832 00:27:49.915 17:24:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:27:49.915 Process with pid 59809 is not found 00:27:49.915 Process with pid 59832 is not found 00:27:49.915 17:24:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:27:49.915 17:24:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59809 ]] 00:27:49.915 17:24:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59809 00:27:49.915 17:24:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59809 ']' 00:27:49.915 17:24:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59809 00:27:49.915 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59809) - No such process 00:27:49.915 17:24:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59809 is not found' 00:27:49.915 17:24:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59832 ]] 00:27:49.915 17:24:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59832 00:27:49.915 17:24:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59832 ']' 00:27:49.915 17:24:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59832 00:27:49.915 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59832) - No such process 00:27:49.915 17:24:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59832 is not found' 00:27:49.915 17:24:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:27:49.915 00:27:49.915 real 0m59.118s 00:27:49.915 user 1m41.435s 00:27:49.915 sys 0m8.041s 00:27:49.915 17:24:26 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:49.915 17:24:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:49.915 
************************************ 00:27:49.915 END TEST cpu_locks 00:27:49.915 ************************************ 00:27:49.915 00:27:49.915 real 1m32.704s 00:27:49.915 user 2m47.636s 00:27:49.915 sys 0m13.060s 00:27:49.915 17:24:27 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:49.915 17:24:27 event -- common/autotest_common.sh@10 -- # set +x 00:27:49.915 ************************************ 00:27:49.915 END TEST event 00:27:49.915 ************************************ 00:27:49.915 17:24:27 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:27:49.915 17:24:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:49.915 17:24:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:49.915 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:27:49.915 ************************************ 00:27:49.915 START TEST thread 00:27:49.915 ************************************ 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:27:49.915 * Looking for test storage... 
00:27:49.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:49.915 17:24:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.915 17:24:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.915 17:24:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.915 17:24:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.915 17:24:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.915 17:24:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.915 17:24:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.915 17:24:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.915 17:24:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.915 17:24:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.915 17:24:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.915 17:24:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:27:49.915 17:24:27 thread -- scripts/common.sh@345 -- # : 1 00:27:49.915 17:24:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.915 17:24:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.915 17:24:27 thread -- scripts/common.sh@365 -- # decimal 1 00:27:49.915 17:24:27 thread -- scripts/common.sh@353 -- # local d=1 00:27:49.915 17:24:27 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.915 17:24:27 thread -- scripts/common.sh@355 -- # echo 1 00:27:49.915 17:24:27 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.915 17:24:27 thread -- scripts/common.sh@366 -- # decimal 2 00:27:49.915 17:24:27 thread -- scripts/common.sh@353 -- # local d=2 00:27:49.915 17:24:27 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.915 17:24:27 thread -- scripts/common.sh@355 -- # echo 2 00:27:49.915 17:24:27 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.915 17:24:27 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.915 17:24:27 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.915 17:24:27 thread -- scripts/common.sh@368 -- # return 0 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:49.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.915 --rc genhtml_branch_coverage=1 00:27:49.915 --rc genhtml_function_coverage=1 00:27:49.915 --rc genhtml_legend=1 00:27:49.915 --rc geninfo_all_blocks=1 00:27:49.915 --rc geninfo_unexecuted_blocks=1 00:27:49.915 00:27:49.915 ' 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:49.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.915 --rc genhtml_branch_coverage=1 00:27:49.915 --rc genhtml_function_coverage=1 00:27:49.915 --rc genhtml_legend=1 00:27:49.915 --rc geninfo_all_blocks=1 00:27:49.915 --rc geninfo_unexecuted_blocks=1 00:27:49.915 00:27:49.915 ' 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:49.915 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.915 --rc genhtml_branch_coverage=1 00:27:49.915 --rc genhtml_function_coverage=1 00:27:49.915 --rc genhtml_legend=1 00:27:49.915 --rc geninfo_all_blocks=1 00:27:49.915 --rc geninfo_unexecuted_blocks=1 00:27:49.915 00:27:49.915 ' 00:27:49.915 17:24:27 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:49.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.915 --rc genhtml_branch_coverage=1 00:27:49.915 --rc genhtml_function_coverage=1 00:27:49.915 --rc genhtml_legend=1 00:27:49.915 --rc geninfo_all_blocks=1 00:27:49.915 --rc geninfo_unexecuted_blocks=1 00:27:49.915 00:27:49.915 ' 00:27:49.916 17:24:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:27:49.916 17:24:27 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:27:49.916 17:24:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:49.916 17:24:27 thread -- common/autotest_common.sh@10 -- # set +x 00:27:49.916 ************************************ 00:27:49.916 START TEST thread_poller_perf 00:27:49.916 ************************************ 00:27:49.916 17:24:27 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:27:49.916 [2024-11-26 17:24:27.349223] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:27:49.916 [2024-11-26 17:24:27.349606] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60038 ] 00:27:50.174 [2024-11-26 17:24:27.544499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.432 [2024-11-26 17:24:27.674096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.432 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:27:51.819 [2024-11-26T17:24:29.266Z] ====================================== 00:27:51.819 [2024-11-26T17:24:29.266Z] busy:2110320426 (cyc) 00:27:51.819 [2024-11-26T17:24:29.266Z] total_run_count: 349000 00:27:51.819 [2024-11-26T17:24:29.266Z] tsc_hz: 2100000000 (cyc) 00:27:51.819 [2024-11-26T17:24:29.266Z] ====================================== 00:27:51.819 [2024-11-26T17:24:29.266Z] poller_cost: 6046 (cyc), 2879 (nsec) 00:27:51.819 00:27:51.819 real 0m1.639s 00:27:51.819 user 0m1.409s 00:27:51.819 sys 0m0.119s 00:27:51.819 17:24:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.819 17:24:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:27:51.819 ************************************ 00:27:51.819 END TEST thread_poller_perf 00:27:51.819 ************************************ 00:27:51.819 17:24:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:27:51.819 17:24:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:27:51.819 17:24:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.819 17:24:28 thread -- common/autotest_common.sh@10 -- # set +x 00:27:51.819 ************************************ 00:27:51.819 START TEST thread_poller_perf 00:27:51.819 
************************************ 00:27:51.819 17:24:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:27:51.819 [2024-11-26 17:24:29.036718] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:51.819 [2024-11-26 17:24:29.036856] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60075 ] 00:27:51.819 [2024-11-26 17:24:29.209383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.076 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:27:52.076 [2024-11-26 17:24:29.329689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.448 [2024-11-26T17:24:30.895Z] ====================================== 00:27:53.448 [2024-11-26T17:24:30.895Z] busy:2103613154 (cyc) 00:27:53.448 [2024-11-26T17:24:30.895Z] total_run_count: 4555000 00:27:53.448 [2024-11-26T17:24:30.895Z] tsc_hz: 2100000000 (cyc) 00:27:53.448 [2024-11-26T17:24:30.895Z] ====================================== 00:27:53.448 [2024-11-26T17:24:30.895Z] poller_cost: 461 (cyc), 219 (nsec) 00:27:53.448 00:27:53.448 real 0m1.583s 00:27:53.448 user 0m1.377s 00:27:53.448 sys 0m0.096s 00:27:53.448 17:24:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:53.448 ************************************ 00:27:53.448 END TEST thread_poller_perf 00:27:53.448 ************************************ 00:27:53.448 17:24:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:27:53.448 17:24:30 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:27:53.448 ************************************ 00:27:53.448 END TEST thread 00:27:53.448 ************************************ 00:27:53.448 
00:27:53.448 real 0m3.543s 00:27:53.448 user 0m2.940s 00:27:53.448 sys 0m0.390s 00:27:53.448 17:24:30 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:53.448 17:24:30 thread -- common/autotest_common.sh@10 -- # set +x 00:27:53.448 17:24:30 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:27:53.449 17:24:30 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:27:53.449 17:24:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:53.449 17:24:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.449 17:24:30 -- common/autotest_common.sh@10 -- # set +x 00:27:53.449 ************************************ 00:27:53.449 START TEST app_cmdline 00:27:53.449 ************************************ 00:27:53.449 17:24:30 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:27:53.449 * Looking for test storage... 00:27:53.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:27:53.449 17:24:30 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:53.449 17:24:30 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:27:53.449 17:24:30 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:53.449 17:24:30 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:27:53.449 17:24:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.707 17:24:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:27:53.707 17:24:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:27:53.707 17:24:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.707 17:24:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:27:53.707 17:24:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.707 17:24:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.707 17:24:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.707 17:24:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:53.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.707 --rc genhtml_branch_coverage=1 00:27:53.707 --rc genhtml_function_coverage=1 00:27:53.707 --rc 
genhtml_legend=1 00:27:53.707 --rc geninfo_all_blocks=1 00:27:53.707 --rc geninfo_unexecuted_blocks=1 00:27:53.707 00:27:53.707 ' 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:53.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.707 --rc genhtml_branch_coverage=1 00:27:53.707 --rc genhtml_function_coverage=1 00:27:53.707 --rc genhtml_legend=1 00:27:53.707 --rc geninfo_all_blocks=1 00:27:53.707 --rc geninfo_unexecuted_blocks=1 00:27:53.707 00:27:53.707 ' 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:53.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.707 --rc genhtml_branch_coverage=1 00:27:53.707 --rc genhtml_function_coverage=1 00:27:53.707 --rc genhtml_legend=1 00:27:53.707 --rc geninfo_all_blocks=1 00:27:53.707 --rc geninfo_unexecuted_blocks=1 00:27:53.707 00:27:53.707 ' 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:53.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.707 --rc genhtml_branch_coverage=1 00:27:53.707 --rc genhtml_function_coverage=1 00:27:53.707 --rc genhtml_legend=1 00:27:53.707 --rc geninfo_all_blocks=1 00:27:53.707 --rc geninfo_unexecuted_blocks=1 00:27:53.707 00:27:53.707 ' 00:27:53.707 17:24:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:27:53.707 17:24:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60164 00:27:53.707 17:24:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60164 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60164 ']' 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.707 17:24:30 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed 
spdk_get_version,rpc_get_methods 00:27:53.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.707 17:24:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:27:53.707 [2024-11-26 17:24:31.044460] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:27:53.707 [2024-11-26 17:24:31.044956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:27:53.966 [2024-11-26 17:24:31.248917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.224 [2024-11-26 17:24:31.414924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.159 17:24:32 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.159 17:24:32 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:27:55.159 17:24:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:27:55.417 { 00:27:55.417 "version": "SPDK v25.01-pre git sha1 f7ce15267", 00:27:55.417 "fields": { 00:27:55.417 "major": 25, 00:27:55.417 "minor": 1, 00:27:55.417 "patch": 0, 00:27:55.417 "suffix": "-pre", 00:27:55.417 "commit": "f7ce15267" 00:27:55.417 } 00:27:55.417 } 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@26 
-- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:27:55.417 17:24:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.417 17:24:32 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:55.417 17:24:32 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:55.675 request: 00:27:55.675 { 00:27:55.675 "method": "env_dpdk_get_mem_stats", 00:27:55.675 "req_id": 1 00:27:55.675 } 00:27:55.675 Got JSON-RPC error response 00:27:55.675 response: 00:27:55.675 { 00:27:55.675 "code": -32601, 00:27:55.675 "message": "Method not found" 00:27:55.675 } 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:55.675 17:24:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60164 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60164 ']' 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60164 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60164 00:27:55.675 killing process with pid 60164 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60164' 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 60164 00:27:55.675 17:24:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 60164 00:27:59.028 00:27:59.028 real 0m5.173s 00:27:59.028 user 0m5.523s 00:27:59.028 sys 0m0.722s 00:27:59.028 
17:24:35 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.028 ************************************ 00:27:59.028 END TEST app_cmdline 00:27:59.028 ************************************ 00:27:59.028 17:24:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:27:59.028 17:24:35 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:27:59.028 17:24:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:59.028 17:24:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:59.028 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:27:59.028 ************************************ 00:27:59.028 START TEST version 00:27:59.028 ************************************ 00:27:59.028 17:24:35 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:27:59.028 * Looking for test storage... 00:27:59.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:27:59.028 17:24:36 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:59.028 17:24:36 version -- common/autotest_common.sh@1693 -- # lcov --version 00:27:59.028 17:24:36 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:59.028 17:24:36 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:59.028 17:24:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:59.028 17:24:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:59.028 17:24:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:59.028 17:24:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:27:59.028 17:24:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:27:59.028 17:24:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:27:59.028 17:24:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:27:59.028 17:24:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:27:59.028 17:24:36 version -- scripts/common.sh@340 -- # ver1_l=2 
00:27:59.028 17:24:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:27:59.028 17:24:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:59.028 17:24:36 version -- scripts/common.sh@344 -- # case "$op" in 00:27:59.028 17:24:36 version -- scripts/common.sh@345 -- # : 1 00:27:59.028 17:24:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:59.028 17:24:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:59.028 17:24:36 version -- scripts/common.sh@365 -- # decimal 1 00:27:59.028 17:24:36 version -- scripts/common.sh@353 -- # local d=1 00:27:59.028 17:24:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:59.028 17:24:36 version -- scripts/common.sh@355 -- # echo 1 00:27:59.028 17:24:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:27:59.028 17:24:36 version -- scripts/common.sh@366 -- # decimal 2 00:27:59.028 17:24:36 version -- scripts/common.sh@353 -- # local d=2 00:27:59.028 17:24:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:59.028 17:24:36 version -- scripts/common.sh@355 -- # echo 2 00:27:59.028 17:24:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:27:59.028 17:24:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:59.028 17:24:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:59.028 17:24:36 version -- scripts/common.sh@368 -- # return 0 00:27:59.028 17:24:36 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:59.028 17:24:36 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:59.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.028 --rc genhtml_branch_coverage=1 00:27:59.028 --rc genhtml_function_coverage=1 00:27:59.028 --rc genhtml_legend=1 00:27:59.028 --rc geninfo_all_blocks=1 00:27:59.028 --rc geninfo_unexecuted_blocks=1 00:27:59.028 00:27:59.028 ' 00:27:59.028 17:24:36 version -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:59.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.028 --rc genhtml_branch_coverage=1 00:27:59.028 --rc genhtml_function_coverage=1 00:27:59.028 --rc genhtml_legend=1 00:27:59.028 --rc geninfo_all_blocks=1 00:27:59.028 --rc geninfo_unexecuted_blocks=1 00:27:59.028 00:27:59.028 ' 00:27:59.028 17:24:36 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:59.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.028 --rc genhtml_branch_coverage=1 00:27:59.028 --rc genhtml_function_coverage=1 00:27:59.028 --rc genhtml_legend=1 00:27:59.028 --rc geninfo_all_blocks=1 00:27:59.028 --rc geninfo_unexecuted_blocks=1 00:27:59.028 00:27:59.028 ' 00:27:59.028 17:24:36 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:59.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.028 --rc genhtml_branch_coverage=1 00:27:59.028 --rc genhtml_function_coverage=1 00:27:59.028 --rc genhtml_legend=1 00:27:59.028 --rc geninfo_all_blocks=1 00:27:59.028 --rc geninfo_unexecuted_blocks=1 00:27:59.028 00:27:59.028 ' 00:27:59.028 17:24:36 version -- app/version.sh@17 -- # get_header_version major 00:27:59.028 17:24:36 version -- app/version.sh@14 -- # cut -f2 00:27:59.028 17:24:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:59.028 17:24:36 version -- app/version.sh@14 -- # tr -d '"' 00:27:59.029 17:24:36 version -- app/version.sh@17 -- # major=25 00:27:59.029 17:24:36 version -- app/version.sh@18 -- # get_header_version minor 00:27:59.029 17:24:36 version -- app/version.sh@14 -- # cut -f2 00:27:59.029 17:24:36 version -- app/version.sh@14 -- # tr -d '"' 00:27:59.029 17:24:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:59.029 17:24:36 version -- app/version.sh@18 -- 
# minor=1 00:27:59.029 17:24:36 version -- app/version.sh@19 -- # get_header_version patch 00:27:59.029 17:24:36 version -- app/version.sh@14 -- # cut -f2 00:27:59.029 17:24:36 version -- app/version.sh@14 -- # tr -d '"' 00:27:59.029 17:24:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:59.029 17:24:36 version -- app/version.sh@19 -- # patch=0 00:27:59.029 17:24:36 version -- app/version.sh@20 -- # get_header_version suffix 00:27:59.029 17:24:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:59.029 17:24:36 version -- app/version.sh@14 -- # tr -d '"' 00:27:59.029 17:24:36 version -- app/version.sh@14 -- # cut -f2 00:27:59.029 17:24:36 version -- app/version.sh@20 -- # suffix=-pre 00:27:59.029 17:24:36 version -- app/version.sh@22 -- # version=25.1 00:27:59.029 17:24:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:27:59.029 17:24:36 version -- app/version.sh@28 -- # version=25.1rc0 00:27:59.029 17:24:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:59.029 17:24:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:27:59.029 17:24:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:27:59.029 17:24:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:27:59.029 00:27:59.029 real 0m0.269s 00:27:59.029 user 0m0.176s 00:27:59.029 sys 0m0.136s 00:27:59.029 ************************************ 00:27:59.029 END TEST version 00:27:59.029 ************************************ 00:27:59.029 17:24:36 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.029 17:24:36 version -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.029 17:24:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:27:59.029 17:24:36 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:27:59.029 17:24:36 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:27:59.029 17:24:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:59.029 17:24:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:59.029 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:27:59.029 ************************************ 00:27:59.029 START TEST bdev_raid 00:27:59.029 ************************************ 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:27:59.029 * Looking for test storage... 00:27:59.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:27:59.029 
17:24:36 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@345 -- # : 1 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:59.029 17:24:36 bdev_raid -- scripts/common.sh@368 -- # return 0 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:59.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.029 --rc genhtml_branch_coverage=1 00:27:59.029 --rc genhtml_function_coverage=1 00:27:59.029 --rc genhtml_legend=1 00:27:59.029 --rc geninfo_all_blocks=1 00:27:59.029 --rc geninfo_unexecuted_blocks=1 00:27:59.029 00:27:59.029 ' 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:27:59.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.029 --rc genhtml_branch_coverage=1 00:27:59.029 --rc genhtml_function_coverage=1 00:27:59.029 --rc genhtml_legend=1 00:27:59.029 --rc geninfo_all_blocks=1 00:27:59.029 --rc geninfo_unexecuted_blocks=1 00:27:59.029 00:27:59.029 ' 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:59.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.029 --rc genhtml_branch_coverage=1 00:27:59.029 --rc genhtml_function_coverage=1 00:27:59.029 --rc genhtml_legend=1 00:27:59.029 --rc geninfo_all_blocks=1 00:27:59.029 --rc geninfo_unexecuted_blocks=1 00:27:59.029 00:27:59.029 ' 00:27:59.029 17:24:36 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:59.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.029 --rc genhtml_branch_coverage=1 00:27:59.029 --rc genhtml_function_coverage=1 00:27:59.029 --rc genhtml_legend=1 00:27:59.029 --rc geninfo_all_blocks=1 00:27:59.029 --rc geninfo_unexecuted_blocks=1 00:27:59.029 00:27:59.029 ' 00:27:59.029 17:24:36 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:59.029 17:24:36 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:27:59.029 17:24:36 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:27:59.029 17:24:36 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:27:59.029 17:24:36 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:27:59.030 17:24:36 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:27:59.030 17:24:36 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:27:59.030 17:24:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:59.030 17:24:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:59.030 17:24:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:27:59.288 ************************************ 00:27:59.288 START TEST raid1_resize_data_offset_test 00:27:59.288 ************************************ 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60359 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:59.288 Process raid pid: 60359 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60359' 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60359 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60359 ']' 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.288 17:24:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.288 [2024-11-26 17:24:36.607416] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:27:59.288 [2024-11-26 17:24:36.608395] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.546 [2024-11-26 17:24:36.810595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.805 [2024-11-26 17:24:36.996592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.063 [2024-11-26 17:24:37.264010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.063 [2024-11-26 17:24:37.264294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.321 malloc0 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.321 malloc1 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.321 17:24:37 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.321 null0 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.321 [2024-11-26 17:24:37.729570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:28:00.321 [2024-11-26 17:24:37.732104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:00.321 [2024-11-26 17:24:37.732169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:28:00.321 [2024-11-26 17:24:37.732350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:00.321 [2024-11-26 17:24:37.732370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:28:00.321 [2024-11-26 17:24:37.732706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:28:00.321 [2024-11-26 17:24:37.732906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:00.321 [2024-11-26 17:24:37.732925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:28:00.321 [2024-11-26 17:24:37.733126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:28:00.321 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.580 17:24:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:28:00.580 17:24:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:28:00.580 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.580 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.580 [2024-11-26 17:24:37.789789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:28:00.580 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.580 17:24:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:28:00.580 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.580 17:24:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.146 malloc2 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.146 [2024-11-26 17:24:38.494226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:01.146 [2024-11-26 17:24:38.515320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.146 [2024-11-26 17:24:38.517861] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60359 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60359 ']' 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60359 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:28:01.146 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60359 00:28:01.404 killing process with pid 60359 00:28:01.404 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:01.404 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:01.404 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60359' 00:28:01.404 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60359 00:28:01.404 17:24:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60359 00:28:01.404 [2024-11-26 17:24:38.616927] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:01.404 [2024-11-26 17:24:38.618525] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:28:01.404 [2024-11-26 17:24:38.618608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:01.404 [2024-11-26 17:24:38.618630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:28:01.404 [2024-11-26 17:24:38.662365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:01.404 [2024-11-26 17:24:38.662729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:01.404 [2024-11-26 17:24:38.662751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:28:03.305 [2024-11-26 17:24:40.661387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:04.680 17:24:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:28:04.680 00:28:04.680 real 0m5.396s 00:28:04.680 user 0m5.257s 00:28:04.680 sys 0m0.686s 00:28:04.680 17:24:41 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.680 17:24:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.680 ************************************ 00:28:04.680 END TEST raid1_resize_data_offset_test 00:28:04.680 ************************************ 00:28:04.680 17:24:41 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:28:04.680 17:24:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:04.680 17:24:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.680 17:24:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:04.680 ************************************ 00:28:04.680 START TEST raid0_resize_superblock_test 00:28:04.680 ************************************ 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:28:04.680 Process raid pid: 60454 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60454 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60454' 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60454 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60454 ']' 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.680 17:24:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.680 [2024-11-26 17:24:42.021677] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:04.680 [2024-11-26 17:24:42.022040] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.937 [2024-11-26 17:24:42.198842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.937 [2024-11-26 17:24:42.331845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.196 [2024-11-26 17:24:42.567279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:05.196 [2024-11-26 17:24:42.567329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:05.764 17:24:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:05.764 17:24:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:28:05.764 17:24:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:28:05.764 17:24:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.764 17:24:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:28:06.332 malloc0 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.332 [2024-11-26 17:24:43.652191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:28:06.332 [2024-11-26 17:24:43.652268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:06.332 [2024-11-26 17:24:43.652297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:06.332 [2024-11-26 17:24:43.652313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:06.332 [2024-11-26 17:24:43.655616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:06.332 [2024-11-26 17:24:43.655688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:28:06.332 pt0 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.332 49b85739-d174-4b30-a5fc-21cad10777cc 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.332 11422540-26b1-41a6-9af7-8a5ddc60b509 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:28:06.332 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.333 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.333 d62a1893-49a5-4813-b837-b29a01e02b60 00:28:06.333 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.333 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:28:06.333 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:28:06.333 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.333 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.599 [2024-11-26 17:24:43.777955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 11422540-26b1-41a6-9af7-8a5ddc60b509 is claimed 00:28:06.599 [2024-11-26 17:24:43.778129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d62a1893-49a5-4813-b837-b29a01e02b60 is claimed 00:28:06.599 [2024-11-26 17:24:43.778307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:06.599 [2024-11-26 17:24:43.778330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:28:06.599 [2024-11-26 17:24:43.778704] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:06.599 [2024-11-26 17:24:43.779259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:06.599 [2024-11-26 17:24:43.779284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:28:06.599 [2024-11-26 17:24:43.779509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:28:06.599 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:28:06.600 17:24:43 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.600 [2024-11-26 17:24:43.886247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.600 [2024-11-26 17:24:43.950159] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:28:06.600 [2024-11-26 17:24:43.950204] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '11422540-26b1-41a6-9af7-8a5ddc60b509' was resized: old size 131072, new size 204800 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.600 [2024-11-26 17:24:43.957981] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:28:06.600 [2024-11-26 17:24:43.958012] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd62a1893-49a5-4813-b837-b29a01e02b60' was resized: old size 131072, new size 204800 00:28:06.600 [2024-11-26 17:24:43.958072] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.600 17:24:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.600 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:28:06.600 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:28:06.600 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.600 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.600 17:24:44 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:28:06.600 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.859 [2024-11-26 17:24:44.058155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.859 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.859 [2024-11-26 17:24:44.089982] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:28:06.859 [2024-11-26 17:24:44.090291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:28:06.859 [2024-11-26 17:24:44.090327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:06.859 [2024-11-26 17:24:44.090348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:28:06.859 [2024-11-26 17:24:44.090477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.860 [2024-11-26 17:24:44.090518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.860 [2024-11-26 17:24:44.090535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 [2024-11-26 17:24:44.101832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:28:06.860 [2024-11-26 17:24:44.102035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:06.860 [2024-11-26 17:24:44.102084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:28:06.860 [2024-11-26 17:24:44.102100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:06.860 [2024-11-26 17:24:44.104691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:06.860 [2024-11-26 17:24:44.104735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
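The resize notices in the log above imply a fixed per-member superblock reservation. From the logged numbers (each lvol resized from 131072 to 204800 blocks, raid block count changed from 245760 to 393216), each base bdev appears to set aside 8192 blocks (4 MiB at a 512-byte block size) for the on-disk superblock. That reservation is inferred from this log's arithmetic, not taken from SPDK documentation. A minimal sketch checking the numbers:

```python
# Sketch: verify the RAID0 block counts reported in this log.
# ASSUMPTION: the 8192-block (4 MiB / 512 B) superblock reservation per
# base bdev is inferred from the logged numbers, not from SPDK docs.

SB_BLOCKS = 8192   # inferred superblock reservation per base bdev
N_BASE = 2         # lvs0/lvol0 and lvs0/lvol1

def raid0_blocks(base_blocks: int) -> int:
    """RAID0 capacity: the data blocks of every member are concatenated."""
    return N_BASE * (base_blocks - SB_BLOCKS)

# Before resize: each lvol is 64 MiB = 131072 blocks of 512 B.
assert raid0_blocks(131072) == 245760   # matches "blockcnt 245760" in the log
# After `bdev_lvol_resize ... 100`: each lvol is 100 MiB = 204800 blocks.
assert raid0_blocks(204800) == 393216   # matches "changed from 245760 to 393216"
print("raid0 block counts consistent")
```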
00:28:06.860 pt0 00:28:06.860 [2024-11-26 17:24:44.106383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 11422540-26b1-41a6-9af7-8a5ddc60b509 00:28:06.860 [2024-11-26 17:24:44.106449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 11422540-26b1-41a6-9af7-8a5ddc60b509 is claimed 00:28:06.860 [2024-11-26 17:24:44.106555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d62a1893-49a5-4813-b837-b29a01e02b60 00:28:06.860 [2024-11-26 17:24:44.106576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d62a1893-49a5-4813-b837-b29a01e02b60 is claimed 00:28:06.860 [2024-11-26 17:24:44.106739] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d62a1893-49a5-4813-b837-b29a01e02b60 (2) smaller than existing raid bdev Raid (3) 00:28:06.860 [2024-11-26 17:24:44.106769] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 11422540-26b1-41a6-9af7-8a5ddc60b509: File exists 00:28:06.860 [2024-11-26 17:24:44.106808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:06.860 [2024-11-26 17:24:44.106821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:28:06.860 [2024-11-26 17:24:44.107111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:06.860 [2024-11-26 17:24:44.107272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:06.860 [2024-11-26 17:24:44.107282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:28:06.860 [2024-11-26 17:24:44.107427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:28:06.860 [2024-11-26 17:24:44.122670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60454 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60454 ']' 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60454 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60454 00:28:06.860 killing process with pid 60454 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60454' 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60454 00:28:06.860 [2024-11-26 17:24:44.192035] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:06.860 [2024-11-26 17:24:44.192182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.860 [2024-11-26 17:24:44.192244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.860 [2024-11-26 17:24:44.192258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:28:06.860 17:24:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60454 00:28:08.761 [2024-11-26 17:24:45.732084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:09.696 ************************************ 00:28:09.696 END TEST raid0_resize_superblock_test 00:28:09.696 ************************************ 00:28:09.696 17:24:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:28:09.696 00:28:09.696 real 0m5.076s 00:28:09.696 user 0m5.271s 00:28:09.696 sys 0m0.646s 00:28:09.696 17:24:47 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.696 17:24:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.696 17:24:47 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:28:09.696 17:24:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:09.696 17:24:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.696 17:24:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:09.696 ************************************ 00:28:09.696 START TEST raid1_resize_superblock_test 00:28:09.696 ************************************ 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60558 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:09.696 Process raid pid: 60558 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60558' 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60558 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60558 ']' 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:28:09.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.696 17:24:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.953 [2024-11-26 17:24:47.176791] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:09.953 [2024-11-26 17:24:47.177024] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.953 [2024-11-26 17:24:47.369952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.211 [2024-11-26 17:24:47.542357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.468 [2024-11-26 17:24:47.817227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:10.468 [2024-11-26 17:24:47.817278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:11.037 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.037 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:28:11.037 17:24:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:28:11.037 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.037 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.604 malloc0 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.604 [2024-11-26 17:24:48.832673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:28:11.604 [2024-11-26 17:24:48.832743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.604 [2024-11-26 17:24:48.832770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:11.604 [2024-11-26 17:24:48.832787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.604 [2024-11-26 17:24:48.835390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.604 [2024-11-26 17:24:48.835436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:28:11.604 pt0 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.604 8142b3e6-0b80-4a4c-874c-7ba528cae90c 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:28:11.604 de0170b9-7e62-41cc-b7ac-f810f57a1dc3 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.604 375c6b7e-21b3-41de-9cb8-a0bb8e1a2e36 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.604 [2024-11-26 17:24:48.959639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev de0170b9-7e62-41cc-b7ac-f810f57a1dc3 is claimed 00:28:11.604 [2024-11-26 17:24:48.959945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 375c6b7e-21b3-41de-9cb8-a0bb8e1a2e36 is claimed 00:28:11.604 [2024-11-26 17:24:48.960140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:11.604 [2024-11-26 17:24:48.960164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:28:11.604 [2024-11-26 17:24:48.960492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:11.604 [2024-11-26 17:24:48.960698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:28:11.604 [2024-11-26 17:24:48.960710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:28:11.604 [2024-11-26 17:24:48.960890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.604 17:24:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.604 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:28:11.604 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:28:11.604 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:28:11.604 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.604 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.604 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:28:11.861 17:24:49 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.861 [2024-11-26 17:24:49.063919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.861 [2024-11-26 17:24:49.095876] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:28:11.861 [2024-11-26 17:24:49.095910] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'de0170b9-7e62-41cc-b7ac-f810f57a1dc3' was resized: old size 131072, new size 204800 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.861 [2024-11-26 17:24:49.103868] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:28:11.861 [2024-11-26 17:24:49.103903] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '375c6b7e-21b3-41de-9cb8-a0bb8e1a2e36' was resized: old size 131072, new size 204800 00:28:11.861 [2024-11-26 17:24:49.103945] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.861 17:24:49 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.861 [2024-11-26 17:24:49.219921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.861 [2024-11-26 17:24:49.263695] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:28:11.861 [2024-11-26 17:24:49.263902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:28:11.861 [2024-11-26 17:24:49.263939] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:28:11.861 [2024-11-26 17:24:49.264143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:11.861 [2024-11-26 17:24:49.264352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:11.861 [2024-11-26 17:24:49.264425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:11.861 [2024-11-26 17:24:49.264443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.861 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.861 [2024-11-26 17:24:49.271608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:28:11.861 [2024-11-26 17:24:49.271667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.861 [2024-11-26 17:24:49.271688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:28:11.861 [2024-11-26 17:24:49.271703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.861 [2024-11-26 17:24:49.274233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.861 [2024-11-26 17:24:49.274417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:28:11.862 [2024-11-26 17:24:49.276224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev de0170b9-7e62-41cc-b7ac-f810f57a1dc3 00:28:11.862 [2024-11-26 
17:24:49.276299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev de0170b9-7e62-41cc-b7ac-f810f57a1dc3 is claimed 00:28:11.862 [2024-11-26 17:24:49.276409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 375c6b7e-21b3-41de-9cb8-a0bb8e1a2e36 00:28:11.862 [2024-11-26 17:24:49.276431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 375c6b7e-21b3-41de-9cb8-a0bb8e1a2e36 is claimed 00:28:11.862 [2024-11-26 17:24:49.276604] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 375c6b7e-21b3-41de-9cb8-a0bb8e1a2e36 (2) smaller than existing raid bdev Raid (3) 00:28:11.862 [2024-11-26 17:24:49.276632] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev de0170b9-7e62-41cc-b7ac-f810f57a1dc3: File exists 00:28:11.862 [2024-11-26 17:24:49.276672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:11.862 [2024-11-26 17:24:49.276686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:11.862 pt0 00:28:11.862 [2024-11-26 17:24:49.276941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:11.862 [2024-11-26 17:24:49.277112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:11.862 [2024-11-26 17:24:49.277123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:28:11.862 [2024-11-26 17:24:49.277277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.862 17:24:49 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.862 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.862 [2024-11-26 17:24:49.296531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60558 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60558 ']' 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60558 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60558 00:28:12.120 killing process with pid 60558 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60558' 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60558 00:28:12.120 [2024-11-26 17:24:49.358759] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:12.120 [2024-11-26 17:24:49.358850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:12.120 17:24:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60558 00:28:12.120 [2024-11-26 17:24:49.358910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:12.120 [2024-11-26 17:24:49.358923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:28:14.021 [2024-11-26 17:24:50.951952] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:14.957 17:24:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:28:14.957 00:28:14.957 real 0m5.141s 00:28:14.957 user 0m5.458s 00:28:14.957 sys 0m0.705s 00:28:14.957 17:24:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:14.957 17:24:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.957 ************************************ 00:28:14.957 END TEST raid1_resize_superblock_test 00:28:14.957 ************************************ 00:28:14.957 17:24:52 
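The raid1_resize_superblock_test that ends above checks that the RAID bdev's block count tracks the base lvol resizes: each 64 MiB lvol reports 131072 blocks of 512 B, the RAID1 bdev reports 122880, and after resizing both lvols to 100 MiB (204800 blocks) the RAID reports 196608. The constant 8192-block difference is, presumably, the region reserved per base bdev when the array is created with the superblock flag (`-s`) — that interpretation is an inference from the numbers in the log, not something the log states. A minimal sketch of the arithmetic:

```python
BLOCK = 512            # blocklen reported in the log
MIB = 1024 * 1024

# Block counts for the two lvol sizes used by the test.
old_blocks = 64 * MIB // BLOCK    # 131072, matches "old size 131072"
new_blocks = 100 * MIB // BLOCK   # 204800, matches "new size 204800"

# Reserved blocks inferred from the log (assumed superblock/metadata region).
reserved = old_blocks - 122880    # 8192 blocks = 4 MiB

assert reserved == 8192
assert old_blocks - reserved == 122880   # RAID blockcnt before resize
assert new_blocks - reserved == 196608   # RAID blockcnt after resize
```

This is consistent with the log's "block count was changed from 122880 to 196608" notice, but the 4 MiB reservation size is an assumption read off the numbers, not taken from SPDK documentation.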
bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:28:14.957 17:24:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:28:14.957 17:24:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:28:14.957 17:24:52 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:28:14.957 17:24:52 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:28:14.957 17:24:52 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:28:14.957 17:24:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:14.957 17:24:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:14.957 17:24:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:14.957 ************************************ 00:28:14.957 START TEST raid_function_test_raid0 00:28:14.957 ************************************ 00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:28:14.957 Process raid pid: 60666 00:28:14.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60666 00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60666' 00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60666 00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60666 ']' 00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.957 17:24:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:14.958 17:24:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.958 17:24:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.958 17:24:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.958 17:24:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:28:15.244 [2024-11-26 17:24:52.420713] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:28:15.244 [2024-11-26 17:24:52.421245] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.244 [2024-11-26 17:24:52.628297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.503 [2024-11-26 17:24:52.833138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.761 [2024-11-26 17:24:53.133086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:15.761 [2024-11-26 17:24:53.133382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:16.019 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.019 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:28:16.019 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:28:16.019 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.019 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:28:16.019 Base_1 00:28:16.019 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.019 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:28:16.019 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.019 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:28:16.279 Base_2 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:28:16.279 [2024-11-26 17:24:53.508981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:28:16.279 [2024-11-26 17:24:53.511824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:28:16.279 [2024-11-26 17:24:53.512189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:16.279 [2024-11-26 17:24:53.512217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:28:16.279 [2024-11-26 17:24:53.512611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:16.279 [2024-11-26 17:24:53.512812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:16.279 [2024-11-26 17:24:53.512825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:28:16.279 [2024-11-26 17:24:53.513105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:16.279 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:28:16.538 [2024-11-26 17:24:53.745290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:16.538 /dev/nbd0 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:16.538 
17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:16.538 1+0 records in 00:28:16.538 1+0 records out 00:28:16.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430391 s, 9.5 MB/s 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:28:16.538 17:24:53 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:16.797 { 00:28:16.797 "nbd_device": "/dev/nbd0", 00:28:16.797 "bdev_name": "raid" 00:28:16.797 } 00:28:16.797 ]' 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:16.797 { 00:28:16.797 "nbd_device": "/dev/nbd0", 00:28:16.797 "bdev_name": "raid" 00:28:16.797 } 00:28:16.797 ]' 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' 
-f 5 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:28:16.797 4096+0 records in 00:28:16.797 4096+0 records out 00:28:16.797 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0273254 s, 76.7 MB/s 00:28:16.797 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:28:17.364 4096+0 records in 00:28:17.364 4096+0 records out 00:28:17.364 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.307271 s, 6.8 MB/s 00:28:17.364 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:28:17.364 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:28:17.364 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 
-- # (( i = 0 )) 00:28:17.364 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:28:17.365 128+0 records in 00:28:17.365 128+0 records out 00:28:17.365 65536 bytes (66 kB, 64 KiB) copied, 0.00118875 s, 55.1 MB/s 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:28:17.365 2035+0 records in 00:28:17.365 2035+0 records out 00:28:17.365 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0166469 s, 62.6 MB/s 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:28:17.365 17:24:54 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:28:17.365 456+0 records in 00:28:17.365 456+0 records out 00:28:17.365 233472 bytes (233 kB, 228 KiB) copied, 0.00294122 s, 79.4 MB/s 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:17.365 17:24:54 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:17.365 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:17.623 [2024-11-26 17:24:54.960138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:28:17.623 17:24:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60666 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60666 ']' 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60666 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.881 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60666 00:28:18.139 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:18.139 killing process with pid 60666 00:28:18.139 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:18.139 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60666' 00:28:18.139 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60666 
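The unmap loop traced above follows a fixed pattern for each (offset, count) pair: `dd` zeroes that range in the reference file, `blkdiscard` discards the same byte range on `/dev/nbd0`, the buffers are flushed, and `cmp` verifies the full 2 MiB still matches (the test relies on discarded RAID regions reading back as zeroes). Below is a minimal sketch of that pattern using two plain files, with `dd if=/dev/zero` standing in for `blkdiscard` so it runs without a block device; all paths and names here are illustrative, not the test's real `/raidtest` paths.

```shell
set -e
ref=$(mktemp)   # stand-in for /raidtest/raidrandtest (reference data)
dev=$(mktemp)   # stand-in for /dev/nbd0

# Seed 2 MiB (4096 x 512-byte blocks) of random data into both.
dd if=/dev/urandom of="$ref" bs=512 count=4096 status=none
cp "$ref" "$dev"

# Same (offset, count) pairs as the trace: zero the range in the
# reference, "discard" (zero) the same range on the device, then
# compare the whole 2 MiB.
for pair in "0 128" "1028 2035" "321 456"; do
  set -- $pair
  dd if=/dev/zero of="$ref" bs=512 seek="$1" count="$2" conv=notrunc status=none
  dd if=/dev/zero of="$dev" bs=512 seek="$1" count="$2" conv=notrunc status=none
  cmp -n 2097152 "$ref" "$dev"
done

ok=yes
rm -f "$ref" "$dev"
echo "unmap ranges verified"
```

On a real device the zeroing step is `blkdiscard -o <offset> -l <length> /dev/nbd0` followed by `blockdev --flushbufs`, exactly as the trace shows.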
00:28:18.139 [2024-11-26 17:24:55.330439] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:18.139 17:24:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60666 00:28:18.139 [2024-11-26 17:24:55.330544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:18.139 [2024-11-26 17:24:55.330599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:18.139 [2024-11-26 17:24:55.330618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:28:18.139 [2024-11-26 17:24:55.573071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:19.515 17:24:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:28:19.515 00:28:19.515 real 0m4.639s 00:28:19.515 user 0m5.269s 00:28:19.515 sys 0m1.335s 00:28:19.515 17:24:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.515 17:24:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:28:19.515 ************************************ 00:28:19.515 END TEST raid_function_test_raid0 00:28:19.515 ************************************ 00:28:19.803 17:24:56 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:28:19.803 17:24:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:19.803 17:24:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.803 17:24:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:19.803 ************************************ 00:28:19.803 START TEST raid_function_test_concat 00:28:19.803 ************************************ 00:28:19.803 17:24:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:28:19.803 17:24:56 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:28:19.803 17:24:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:28:19.803 17:24:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:28:19.803 17:24:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60795 00:28:19.803 17:24:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:19.803 17:24:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60795' 00:28:19.803 Process raid pid: 60795 00:28:19.803 17:24:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60795 00:28:19.804 17:24:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60795 ']' 00:28:19.804 17:24:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.804 17:24:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.804 17:24:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.804 17:24:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.804 17:24:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:28:19.804 [2024-11-26 17:24:57.133840] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:28:19.804 [2024-11-26 17:24:57.134026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.061 [2024-11-26 17:24:57.350195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.320 [2024-11-26 17:24:57.590188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.578 [2024-11-26 17:24:57.899264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:20.578 [2024-11-26 17:24:57.899306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:20.835 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.835 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:28:20.835 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:28:20.835 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.835 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:28:20.835 Base_1 00:28:20.835 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.835 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:28:20.835 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.835 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:28:21.093 Base_2 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:28:21.093 [2024-11-26 17:24:58.292486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:28:21.093 [2024-11-26 17:24:58.294882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:28:21.093 [2024-11-26 17:24:58.294987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:21.093 [2024-11-26 17:24:58.295007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:28:21.093 [2024-11-26 17:24:58.295349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:21.093 [2024-11-26 17:24:58.295517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:21.093 [2024-11-26 17:24:58.295530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:28:21.093 [2024-11-26 17:24:58.295710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.093 17:24:58 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:21.093 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:28:21.350 [2024-11-26 17:24:58.556605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:21.350 /dev/nbd0 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:21.350 1+0 records in 00:28:21.350 1+0 records out 00:28:21.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036289 s, 11.3 MB/s 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
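The `waitfornbd` steps visible in the trace (`(( i <= 20 ))` with `grep -q -w nbd0 /proc/partitions` and `break`) implement a bounded poll: retry up to 20 times until the nbd device shows up in the partition table. A self-contained sketch of that polling idea follows; a temp file stands in for `/proc/partitions` so it runs anywhere, and the function name is illustrative, not part of the test suite.

```shell
# Poll up to 20 times for $1 to appear as a whole word in file $2,
# mirroring the waitfornbd loop in the trace.
waitfor_partition() {
  name=$1
  table=$2
  i=1
  while [ "$i" -le 20 ]; do
    if grep -q -w "$name" "$table"; then
      return 0        # device visible, same as the trace's "break"
    fi
    sleep 0.1
    i=$((i + 1))
  done
  return 1            # timed out
}

table=$(mktemp)
# Simulate the kernel registering the device a moment later.
( sleep 0.2; echo nbd0 >> "$table" ) &

if waitfor_partition nbd0 "$table"; then found=yes; else found=no; fi
wait
rm -f "$table"
echo "device wait result: $found"
```

The bounded retry count matters: if the RPC succeeded but the kernel never exposes the device, the test fails fast instead of hanging.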
00:28:21.350 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:21.608 { 00:28:21.608 "nbd_device": "/dev/nbd0", 00:28:21.608 "bdev_name": "raid" 00:28:21.608 } 00:28:21.608 ]' 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:21.608 { 00:28:21.608 "nbd_device": "/dev/nbd0", 00:28:21.608 "bdev_name": "raid" 00:28:21.608 } 00:28:21.608 ]' 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:28:21.608 17:24:58 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:28:21.608 17:24:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:28:21.608 4096+0 records in 00:28:21.608 4096+0 records out 00:28:21.608 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0423713 s, 49.5 MB/s 00:28:21.608 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:28:22.174 4096+0 records in 00:28:22.174 4096+0 records out 00:28:22.174 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.318201 s, 6.6 MB/s 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:28:22.174 128+0 records in 00:28:22.174 128+0 records out 00:28:22.174 65536 bytes (66 kB, 64 KiB) copied, 0.00167481 s, 39.1 MB/s 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:28:22.174 2035+0 records in 00:28:22.174 2035+0 records out 00:28:22.174 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0125226 s, 83.2 MB/s 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:28:22.174 456+0 records in 00:28:22.174 456+0 records out 00:28:22.174 233472 bytes (233 kB, 228 KiB) copied, 0.00503216 s, 46.4 MB/s 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:22.174 17:24:59 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:22.174 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:22.433 [2024-11-26 17:24:59.730150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:28:22.433 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:28:22.690 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:22.690 17:24:59 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:28:22.690 17:24:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60795
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60795 ']'
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60795
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60795
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 60795
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60795'
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60795
00:28:22.690 [2024-11-26 17:25:00.079279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:28:22.690 17:25:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60795
00:28:22.690 [2024-11-26 17:25:00.079417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:28:22.690 [2024-11-26 17:25:00.079489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:28:22.690 [2024-11-26 17:25:00.079508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:28:22.948 [2024-11-26 17:25:00.329181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:28:24.318 17:25:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:28:24.318
00:28:24.318 real 0m4.683s
00:28:24.318 user 0m5.449s
00:28:24.318 sys 0m1.264s
00:28:24.318 17:25:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:24.318 ************************************
00:28:24.318 END TEST raid_function_test_concat
00:28:24.318 17:25:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:28:24.318 ************************************
00:28:24.318 17:25:01 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:28:24.318 17:25:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:24.318 17:25:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:24.318 17:25:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:28:24.318 ************************************
00:28:24.318 START TEST raid0_resize_test
00:28:24.318 ************************************
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60935
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60935'
Process raid pid: 60935
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60935
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60935 ']'
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:24.318 17:25:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:24.581 [2024-11-26 17:25:01.818578] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:28:24.581 [2024-11-26 17:25:01.818798] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:24.581 [2024-11-26 17:25:02.012346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:24.839 [2024-11-26 17:25:02.199554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:25.097 [2024-11-26 17:25:02.474730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:25.097 [2024-11-26 17:25:02.474781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.663 Base_1
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.663 Base_2
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.663 [2024-11-26 17:25:03.043919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:28:25.663 [2024-11-26 17:25:03.047089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:28:25.663 [2024-11-26 17:25:03.047192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:28:25.663 [2024-11-26 17:25:03.047211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:28:25.663 [2024-11-26 17:25:03.047610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:28:25.663 [2024-11-26 17:25:03.047788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:28:25.663 [2024-11-26 17:25:03.047808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:28:25.663 [2024-11-26 17:25:03.048105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.663 [2024-11-26 17:25:03.052096] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:28:25.663 [2024-11-26 17:25:03.052140] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:28:25.663 true
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.663 [2024-11-26 17:25:03.064444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:28:25.663 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:28:25.920 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:28:25.920 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.921 [2024-11-26 17:25:03.112179] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:28:25.921 [2024-11-26 17:25:03.112247] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:28:25.921 [2024-11-26 17:25:03.112314] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:28:25.921 true
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:25.921 [2024-11-26 17:25:03.124533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60935
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60935 ']'
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60935
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60935
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 60935
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60935'
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60935
00:28:25.921 17:25:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60935
00:28:25.921 [2024-11-26 17:25:03.202745] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:28:25.921 [2024-11-26 17:25:03.202896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:28:25.921 [2024-11-26 17:25:03.203003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:28:25.921 [2024-11-26 17:25:03.203030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:28:25.921 [2024-11-26 17:25:03.222827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:28:27.293 17:25:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:28:27.293
00:28:27.293 real 0m2.746s
00:28:27.293 user 0m3.197s
00:28:27.293 sys 0m0.403s
00:28:27.293 17:25:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:27.293 17:25:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:27.293 ************************************
00:28:27.293 END TEST raid0_resize_test
00:28:27.293 ************************************
00:28:27.293 17:25:04 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:28:27.293 17:25:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:27.293 17:25:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:27.293 17:25:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:28:27.293 ************************************
00:28:27.293 START TEST raid1_resize_test
00:28:27.293 ************************************
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60997
Process raid pid: 60997
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60997'
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60997
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60997 ']'
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:27.293 17:25:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:27.551 [2024-11-26 17:25:04.635425] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:28:27.551 [2024-11-26 17:25:04.635619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:27.551 [2024-11-26 17:25:04.840757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:27.551 [2024-11-26 17:25:04.969448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:27.809 [2024-11-26 17:25:05.194850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:27.809 [2024-11-26 17:25:05.194902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:28.377 Base_1
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:28.377 Base_2
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:28.377 [2024-11-26 17:25:05.573726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:28:28.377 [2024-11-26 17:25:05.575953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:28:28.377 [2024-11-26 17:25:05.576222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:28:28.377 [2024-11-26 17:25:05.576249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:28:28.377 [2024-11-26 17:25:05.576602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:28:28.377 [2024-11-26 17:25:05.576747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:28:28.377 [2024-11-26 17:25:05.576759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:28:28.377 [2024-11-26 17:25:05.576940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:28.377 [2024-11-26 17:25:05.581698] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:28:28.377 [2024-11-26 17:25:05.581735] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:28:28.377 true
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:28.377 [2024-11-26 17:25:05.593876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:28.377 [2024-11-26 17:25:05.629689] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:28:28.377 [2024-11-26 17:25:05.629716] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:28:28.377 [2024-11-26 17:25:05.629750] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:28:28.377 true
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:28:28.377 [2024-11-26 17:25:05.641848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60997
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60997 ']'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60997
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60997
killing process with pid 60997
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60997'
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60997
00:28:28.377 [2024-11-26 17:25:05.716366] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:28:28.377 17:25:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60997
00:28:28.377 [2024-11-26 17:25:05.716467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:28:28.377 [2024-11-26 17:25:05.716985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:28:28.377 [2024-11-26 17:25:05.717013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:28:28.377 [2024-11-26 17:25:05.735076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:28:29.756 ************************************
00:28:29.756 END TEST raid1_resize_test
00:28:29.756 ************************************
00:28:29.756 17:25:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:28:29.756
00:28:29.756 real 0m2.406s
00:28:29.756 user 0m2.556s
00:28:29.756 sys 0m0.426s
00:28:29.756 17:25:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:29.756 17:25:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:28:29.756 17:25:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:28:29.756 17:25:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:28:29.756 17:25:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:28:29.756 17:25:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:28:29.756 17:25:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:29.756 17:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:28:29.756 ************************************
00:28:29.756 START TEST raid_state_function_test
00:28:29.756 ************************************
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
Process raid pid: 61059
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61059
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61059'
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61059
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61059 ']'
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:29.756 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:28:29.756 [2024-11-26 17:25:07.099159] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:28:29.756 [2024-11-26 17:25:07.099583] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:30.015 [2024-11-26 17:25:07.294204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:30.015 [2024-11-26 17:25:07.416216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:30.273 [2024-11-26 17:25:07.619318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:30.273 [2024-11-26 17:25:07.619543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:30.840 [2024-11-26 17:25:07.989297] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:28:30.840 [2024-11-26 17:25:07.989497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:28:30.840 [2024-11-26 17:25:07.989593] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:28:30.840 [2024-11-26 17:25:07.989642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.840 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:30.841 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.841 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:30.841 "name": "Existed_Raid",
00:28:30.841 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:30.841 "strip_size_kb": 64,
00:28:30.841 "state": "configuring",
00:28:30.841 "raid_level": "raid0",
00:28:30.841 "superblock": false,
00:28:30.841 "num_base_bdevs": 2,
00:28:30.841 "num_base_bdevs_discovered": 0,
00:28:30.841 "num_base_bdevs_operational": 2,
00:28:30.841 "base_bdevs_list": [
00:28:30.841 {
00:28:30.841 "name": "BaseBdev1",
00:28:30.841 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:30.841 "is_configured": false,
00:28:30.841 "data_offset": 0,
00:28:30.841 "data_size": 0
00:28:30.841 },
00:28:30.841 {
00:28:30.841 "name": "BaseBdev2",
00:28:30.841 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:30.841 "is_configured": false,
00:28:30.841 "data_offset": 0,
00:28:30.841 "data_size": 0
00:28:30.841 }
00:28:30.841 ]
00:28:30.841 }'
00:28:30.841 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:30.841 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:31.099 [2024-11-26 17:25:08.441359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:28:31.099 [2024-11-26 17:25:08.441406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:31.099 [2024-11-26 17:25:08.453356] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:31.099 [2024-11-26 17:25:08.453410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:31.099 [2024-11-26 17:25:08.453423] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:31.099 [2024-11-26 17:25:08.453440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.099 [2024-11-26 17:25:08.500999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:31.099 BaseBdev1 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.099 [ 00:28:31.099 { 00:28:31.099 "name": "BaseBdev1", 00:28:31.099 "aliases": [ 00:28:31.099 "669a80ec-1b5f-4c9f-8567-008892ec6f3f" 00:28:31.099 ], 00:28:31.099 "product_name": "Malloc disk", 00:28:31.099 "block_size": 512, 00:28:31.099 "num_blocks": 65536, 00:28:31.099 "uuid": "669a80ec-1b5f-4c9f-8567-008892ec6f3f", 00:28:31.099 "assigned_rate_limits": { 00:28:31.099 "rw_ios_per_sec": 0, 00:28:31.099 "rw_mbytes_per_sec": 0, 00:28:31.099 "r_mbytes_per_sec": 0, 00:28:31.099 "w_mbytes_per_sec": 0 00:28:31.099 }, 00:28:31.099 "claimed": true, 00:28:31.099 "claim_type": "exclusive_write", 00:28:31.099 "zoned": false, 00:28:31.099 "supported_io_types": { 00:28:31.099 "read": true, 00:28:31.099 "write": true, 00:28:31.099 "unmap": true, 00:28:31.099 "flush": true, 00:28:31.099 "reset": true, 00:28:31.099 "nvme_admin": false, 00:28:31.099 "nvme_io": false, 00:28:31.099 "nvme_io_md": false, 00:28:31.099 "write_zeroes": true, 00:28:31.099 "zcopy": true, 00:28:31.099 "get_zone_info": false, 00:28:31.099 "zone_management": false, 00:28:31.099 "zone_append": false, 00:28:31.099 "compare": false, 00:28:31.099 "compare_and_write": false, 00:28:31.099 "abort": true, 00:28:31.099 "seek_hole": false, 00:28:31.099 "seek_data": false, 00:28:31.099 "copy": true, 00:28:31.099 "nvme_iov_md": 
false 00:28:31.099 }, 00:28:31.099 "memory_domains": [ 00:28:31.099 { 00:28:31.099 "dma_device_id": "system", 00:28:31.099 "dma_device_type": 1 00:28:31.099 }, 00:28:31.099 { 00:28:31.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:31.099 "dma_device_type": 2 00:28:31.099 } 00:28:31.099 ], 00:28:31.099 "driver_specific": {} 00:28:31.099 } 00:28:31.099 ] 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:31.099 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:31.358 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:31.358 
17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.358 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.358 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.358 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:31.358 "name": "Existed_Raid", 00:28:31.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.358 "strip_size_kb": 64, 00:28:31.358 "state": "configuring", 00:28:31.358 "raid_level": "raid0", 00:28:31.358 "superblock": false, 00:28:31.358 "num_base_bdevs": 2, 00:28:31.358 "num_base_bdevs_discovered": 1, 00:28:31.358 "num_base_bdevs_operational": 2, 00:28:31.358 "base_bdevs_list": [ 00:28:31.358 { 00:28:31.358 "name": "BaseBdev1", 00:28:31.358 "uuid": "669a80ec-1b5f-4c9f-8567-008892ec6f3f", 00:28:31.358 "is_configured": true, 00:28:31.358 "data_offset": 0, 00:28:31.358 "data_size": 65536 00:28:31.358 }, 00:28:31.358 { 00:28:31.358 "name": "BaseBdev2", 00:28:31.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.358 "is_configured": false, 00:28:31.358 "data_offset": 0, 00:28:31.358 "data_size": 0 00:28:31.358 } 00:28:31.358 ] 00:28:31.358 }' 00:28:31.358 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:31.358 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.633 [2024-11-26 17:25:08.969163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:31.633 [2024-11-26 17:25:08.969218] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.633 [2024-11-26 17:25:08.981192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:31.633 [2024-11-26 17:25:08.983561] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:31.633 [2024-11-26 17:25:08.983731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.633 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.633 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.633 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:31.633 "name": "Existed_Raid", 00:28:31.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.633 "strip_size_kb": 64, 00:28:31.633 "state": "configuring", 00:28:31.633 "raid_level": "raid0", 00:28:31.633 "superblock": false, 00:28:31.633 "num_base_bdevs": 2, 00:28:31.633 "num_base_bdevs_discovered": 1, 00:28:31.633 "num_base_bdevs_operational": 2, 00:28:31.633 "base_bdevs_list": [ 00:28:31.633 { 00:28:31.633 "name": "BaseBdev1", 00:28:31.633 "uuid": "669a80ec-1b5f-4c9f-8567-008892ec6f3f", 00:28:31.633 "is_configured": true, 00:28:31.633 "data_offset": 0, 00:28:31.633 "data_size": 65536 00:28:31.633 }, 00:28:31.633 { 00:28:31.633 "name": "BaseBdev2", 00:28:31.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.633 "is_configured": false, 00:28:31.633 "data_offset": 0, 00:28:31.633 "data_size": 0 00:28:31.633 } 00:28:31.633 
] 00:28:31.633 }' 00:28:31.633 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:31.633 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.197 [2024-11-26 17:25:09.466827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:32.197 [2024-11-26 17:25:09.467387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:32.197 [2024-11-26 17:25:09.467415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:28:32.197 [2024-11-26 17:25:09.467766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:32.197 [2024-11-26 17:25:09.467979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:32.197 [2024-11-26 17:25:09.467996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:32.197 [2024-11-26 17:25:09.468330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:32.197 BaseBdev2 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:32.197 17:25:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.197 [ 00:28:32.197 { 00:28:32.197 "name": "BaseBdev2", 00:28:32.197 "aliases": [ 00:28:32.197 "870fe54f-5459-4a62-b947-773ce92c2436" 00:28:32.197 ], 00:28:32.197 "product_name": "Malloc disk", 00:28:32.197 "block_size": 512, 00:28:32.197 "num_blocks": 65536, 00:28:32.197 "uuid": "870fe54f-5459-4a62-b947-773ce92c2436", 00:28:32.197 "assigned_rate_limits": { 00:28:32.197 "rw_ios_per_sec": 0, 00:28:32.197 "rw_mbytes_per_sec": 0, 00:28:32.197 "r_mbytes_per_sec": 0, 00:28:32.197 "w_mbytes_per_sec": 0 00:28:32.197 }, 00:28:32.197 "claimed": true, 00:28:32.197 "claim_type": "exclusive_write", 00:28:32.197 "zoned": false, 00:28:32.197 "supported_io_types": { 00:28:32.197 "read": true, 00:28:32.197 "write": true, 00:28:32.197 "unmap": true, 00:28:32.197 "flush": true, 00:28:32.197 "reset": true, 00:28:32.197 "nvme_admin": false, 00:28:32.197 "nvme_io": false, 00:28:32.197 "nvme_io_md": 
false, 00:28:32.197 "write_zeroes": true, 00:28:32.197 "zcopy": true, 00:28:32.197 "get_zone_info": false, 00:28:32.197 "zone_management": false, 00:28:32.197 "zone_append": false, 00:28:32.197 "compare": false, 00:28:32.197 "compare_and_write": false, 00:28:32.197 "abort": true, 00:28:32.197 "seek_hole": false, 00:28:32.197 "seek_data": false, 00:28:32.197 "copy": true, 00:28:32.197 "nvme_iov_md": false 00:28:32.197 }, 00:28:32.197 "memory_domains": [ 00:28:32.197 { 00:28:32.197 "dma_device_id": "system", 00:28:32.197 "dma_device_type": 1 00:28:32.197 }, 00:28:32.197 { 00:28:32.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.197 "dma_device_type": 2 00:28:32.197 } 00:28:32.197 ], 00:28:32.197 "driver_specific": {} 00:28:32.197 } 00:28:32.197 ] 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:32.197 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:32.198 "name": "Existed_Raid", 00:28:32.198 "uuid": "2ade7d86-bddd-4698-91bc-6f11175e563c", 00:28:32.198 "strip_size_kb": 64, 00:28:32.198 "state": "online", 00:28:32.198 "raid_level": "raid0", 00:28:32.198 "superblock": false, 00:28:32.198 "num_base_bdevs": 2, 00:28:32.198 "num_base_bdevs_discovered": 2, 00:28:32.198 "num_base_bdevs_operational": 2, 00:28:32.198 "base_bdevs_list": [ 00:28:32.198 { 00:28:32.198 "name": "BaseBdev1", 00:28:32.198 "uuid": "669a80ec-1b5f-4c9f-8567-008892ec6f3f", 00:28:32.198 "is_configured": true, 00:28:32.198 "data_offset": 0, 00:28:32.198 "data_size": 65536 00:28:32.198 }, 00:28:32.198 { 00:28:32.198 "name": "BaseBdev2", 00:28:32.198 "uuid": "870fe54f-5459-4a62-b947-773ce92c2436", 00:28:32.198 "is_configured": true, 00:28:32.198 "data_offset": 0, 00:28:32.198 "data_size": 65536 00:28:32.198 } 00:28:32.198 ] 00:28:32.198 }' 00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:28:32.198 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.456 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:32.456 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:32.456 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:32.456 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:32.456 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:32.456 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:32.715 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:32.715 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:32.715 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.715 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.715 [2024-11-26 17:25:09.911315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:32.715 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.715 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:32.715 "name": "Existed_Raid", 00:28:32.715 "aliases": [ 00:28:32.715 "2ade7d86-bddd-4698-91bc-6f11175e563c" 00:28:32.715 ], 00:28:32.715 "product_name": "Raid Volume", 00:28:32.715 "block_size": 512, 00:28:32.715 "num_blocks": 131072, 00:28:32.715 "uuid": "2ade7d86-bddd-4698-91bc-6f11175e563c", 00:28:32.715 "assigned_rate_limits": { 00:28:32.715 "rw_ios_per_sec": 0, 00:28:32.715 "rw_mbytes_per_sec": 0, 00:28:32.715 "r_mbytes_per_sec": 
0, 00:28:32.715 "w_mbytes_per_sec": 0 00:28:32.715 }, 00:28:32.715 "claimed": false, 00:28:32.715 "zoned": false, 00:28:32.715 "supported_io_types": { 00:28:32.715 "read": true, 00:28:32.715 "write": true, 00:28:32.715 "unmap": true, 00:28:32.715 "flush": true, 00:28:32.715 "reset": true, 00:28:32.715 "nvme_admin": false, 00:28:32.715 "nvme_io": false, 00:28:32.715 "nvme_io_md": false, 00:28:32.715 "write_zeroes": true, 00:28:32.715 "zcopy": false, 00:28:32.715 "get_zone_info": false, 00:28:32.715 "zone_management": false, 00:28:32.715 "zone_append": false, 00:28:32.715 "compare": false, 00:28:32.715 "compare_and_write": false, 00:28:32.715 "abort": false, 00:28:32.715 "seek_hole": false, 00:28:32.716 "seek_data": false, 00:28:32.716 "copy": false, 00:28:32.716 "nvme_iov_md": false 00:28:32.716 }, 00:28:32.716 "memory_domains": [ 00:28:32.716 { 00:28:32.716 "dma_device_id": "system", 00:28:32.716 "dma_device_type": 1 00:28:32.716 }, 00:28:32.716 { 00:28:32.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.716 "dma_device_type": 2 00:28:32.716 }, 00:28:32.716 { 00:28:32.716 "dma_device_id": "system", 00:28:32.716 "dma_device_type": 1 00:28:32.716 }, 00:28:32.716 { 00:28:32.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.716 "dma_device_type": 2 00:28:32.716 } 00:28:32.716 ], 00:28:32.716 "driver_specific": { 00:28:32.716 "raid": { 00:28:32.716 "uuid": "2ade7d86-bddd-4698-91bc-6f11175e563c", 00:28:32.716 "strip_size_kb": 64, 00:28:32.716 "state": "online", 00:28:32.716 "raid_level": "raid0", 00:28:32.716 "superblock": false, 00:28:32.716 "num_base_bdevs": 2, 00:28:32.716 "num_base_bdevs_discovered": 2, 00:28:32.716 "num_base_bdevs_operational": 2, 00:28:32.716 "base_bdevs_list": [ 00:28:32.716 { 00:28:32.716 "name": "BaseBdev1", 00:28:32.716 "uuid": "669a80ec-1b5f-4c9f-8567-008892ec6f3f", 00:28:32.716 "is_configured": true, 00:28:32.716 "data_offset": 0, 00:28:32.716 "data_size": 65536 00:28:32.716 }, 00:28:32.716 { 00:28:32.716 "name": "BaseBdev2", 
00:28:32.716 "uuid": "870fe54f-5459-4a62-b947-773ce92c2436", 00:28:32.716 "is_configured": true, 00:28:32.716 "data_offset": 0, 00:28:32.716 "data_size": 65536 00:28:32.716 } 00:28:32.716 ] 00:28:32.716 } 00:28:32.716 } 00:28:32.716 }' 00:28:32.716 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:32.716 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:32.716 BaseBdev2' 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.716 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.716 [2024-11-26 17:25:10.143117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:32.716 [2024-11-26 17:25:10.143168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:32.716 [2024-11-26 17:25:10.143221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:32.974 "name": "Existed_Raid", 00:28:32.974 "uuid": "2ade7d86-bddd-4698-91bc-6f11175e563c", 00:28:32.974 "strip_size_kb": 64, 00:28:32.974 
"state": "offline", 00:28:32.974 "raid_level": "raid0", 00:28:32.974 "superblock": false, 00:28:32.974 "num_base_bdevs": 2, 00:28:32.974 "num_base_bdevs_discovered": 1, 00:28:32.974 "num_base_bdevs_operational": 1, 00:28:32.974 "base_bdevs_list": [ 00:28:32.974 { 00:28:32.974 "name": null, 00:28:32.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:32.974 "is_configured": false, 00:28:32.974 "data_offset": 0, 00:28:32.974 "data_size": 65536 00:28:32.974 }, 00:28:32.974 { 00:28:32.974 "name": "BaseBdev2", 00:28:32.974 "uuid": "870fe54f-5459-4a62-b947-773ce92c2436", 00:28:32.974 "is_configured": true, 00:28:32.974 "data_offset": 0, 00:28:32.974 "data_size": 65536 00:28:32.974 } 00:28:32.974 ] 00:28:32.974 }' 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:32.974 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.541 [2024-11-26 17:25:10.730286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:33.541 [2024-11-26 17:25:10.730346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61059 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61059 ']' 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61059 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61059 00:28:33.541 killing process with pid 61059 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61059' 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61059 00:28:33.541 [2024-11-26 17:25:10.932531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:33.541 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61059 00:28:33.541 [2024-11-26 17:25:10.952427] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:35.004 17:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:28:35.004 00:28:35.004 real 0m5.258s 00:28:35.004 user 0m7.485s 00:28:35.004 sys 0m0.869s 00:28:35.004 17:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.004 17:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.004 ************************************ 00:28:35.004 END TEST raid_state_function_test 00:28:35.004 ************************************ 00:28:35.004 17:25:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:28:35.004 17:25:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:28:35.005 17:25:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.005 17:25:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:35.005 ************************************ 00:28:35.005 START TEST raid_state_function_test_sb 00:28:35.005 ************************************ 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61312 00:28:35.005 Process raid pid: 61312 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61312' 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61312 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61312 ']' 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.005 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.005 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:35.262 [2024-11-26 17:25:12.456956] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:35.262 [2024-11-26 17:25:12.457141] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.262 [2024-11-26 17:25:12.653087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.521 [2024-11-26 17:25:12.786379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.779 [2024-11-26 17:25:13.033078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:35.779 [2024-11-26 17:25:13.033128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.038 [2024-11-26 17:25:13.430389] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:28:36.038 [2024-11-26 17:25:13.430457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:36.038 [2024-11-26 17:25:13.430474] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:36.038 [2024-11-26 17:25:13.430493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:36.038 "name": "Existed_Raid", 00:28:36.038 "uuid": "2e630f03-6613-452d-85cd-9f39ad05c990", 00:28:36.038 "strip_size_kb": 64, 00:28:36.038 "state": "configuring", 00:28:36.038 "raid_level": "raid0", 00:28:36.038 "superblock": true, 00:28:36.038 "num_base_bdevs": 2, 00:28:36.038 "num_base_bdevs_discovered": 0, 00:28:36.038 "num_base_bdevs_operational": 2, 00:28:36.038 "base_bdevs_list": [ 00:28:36.038 { 00:28:36.038 "name": "BaseBdev1", 00:28:36.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.038 "is_configured": false, 00:28:36.038 "data_offset": 0, 00:28:36.038 "data_size": 0 00:28:36.038 }, 00:28:36.038 { 00:28:36.038 "name": "BaseBdev2", 00:28:36.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.038 "is_configured": false, 00:28:36.038 "data_offset": 0, 00:28:36.038 "data_size": 0 00:28:36.038 } 00:28:36.038 ] 00:28:36.038 }' 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:36.038 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.604 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:36.604 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.604 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.604 [2024-11-26 17:25:13.862419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:28:36.604 [2024-11-26 17:25:13.862615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:36.604 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.604 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:36.604 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.604 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.605 [2024-11-26 17:25:13.874446] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:36.605 [2024-11-26 17:25:13.874504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:36.605 [2024-11-26 17:25:13.874518] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:36.605 [2024-11-26 17:25:13.874536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.605 [2024-11-26 17:25:13.926031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:36.605 BaseBdev1 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.605 [ 00:28:36.605 { 00:28:36.605 "name": "BaseBdev1", 00:28:36.605 "aliases": [ 00:28:36.605 "95be1619-1cf3-4f22-a5b2-f6ea1e4d3b52" 00:28:36.605 ], 00:28:36.605 "product_name": "Malloc disk", 00:28:36.605 "block_size": 512, 00:28:36.605 "num_blocks": 65536, 00:28:36.605 "uuid": "95be1619-1cf3-4f22-a5b2-f6ea1e4d3b52", 00:28:36.605 "assigned_rate_limits": { 00:28:36.605 "rw_ios_per_sec": 0, 00:28:36.605 "rw_mbytes_per_sec": 0, 00:28:36.605 "r_mbytes_per_sec": 0, 00:28:36.605 "w_mbytes_per_sec": 0 00:28:36.605 }, 00:28:36.605 "claimed": true, 
00:28:36.605 "claim_type": "exclusive_write", 00:28:36.605 "zoned": false, 00:28:36.605 "supported_io_types": { 00:28:36.605 "read": true, 00:28:36.605 "write": true, 00:28:36.605 "unmap": true, 00:28:36.605 "flush": true, 00:28:36.605 "reset": true, 00:28:36.605 "nvme_admin": false, 00:28:36.605 "nvme_io": false, 00:28:36.605 "nvme_io_md": false, 00:28:36.605 "write_zeroes": true, 00:28:36.605 "zcopy": true, 00:28:36.605 "get_zone_info": false, 00:28:36.605 "zone_management": false, 00:28:36.605 "zone_append": false, 00:28:36.605 "compare": false, 00:28:36.605 "compare_and_write": false, 00:28:36.605 "abort": true, 00:28:36.605 "seek_hole": false, 00:28:36.605 "seek_data": false, 00:28:36.605 "copy": true, 00:28:36.605 "nvme_iov_md": false 00:28:36.605 }, 00:28:36.605 "memory_domains": [ 00:28:36.605 { 00:28:36.605 "dma_device_id": "system", 00:28:36.605 "dma_device_type": 1 00:28:36.605 }, 00:28:36.605 { 00:28:36.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:36.605 "dma_device_type": 2 00:28:36.605 } 00:28:36.605 ], 00:28:36.605 "driver_specific": {} 00:28:36.605 } 00:28:36.605 ] 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:36.605 17:25:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:36.605 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.605 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:36.605 "name": "Existed_Raid", 00:28:36.605 "uuid": "ebbc3207-a5a3-4ebc-b699-39749d85293d", 00:28:36.605 "strip_size_kb": 64, 00:28:36.605 "state": "configuring", 00:28:36.605 "raid_level": "raid0", 00:28:36.605 "superblock": true, 00:28:36.605 "num_base_bdevs": 2, 00:28:36.605 "num_base_bdevs_discovered": 1, 00:28:36.605 "num_base_bdevs_operational": 2, 00:28:36.605 "base_bdevs_list": [ 00:28:36.605 { 00:28:36.605 "name": "BaseBdev1", 00:28:36.605 "uuid": "95be1619-1cf3-4f22-a5b2-f6ea1e4d3b52", 00:28:36.605 "is_configured": true, 00:28:36.605 "data_offset": 2048, 00:28:36.605 "data_size": 63488 00:28:36.605 }, 00:28:36.605 { 00:28:36.605 "name": "BaseBdev2", 00:28:36.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.605 
"is_configured": false, 00:28:36.605 "data_offset": 0, 00:28:36.605 "data_size": 0 00:28:36.605 } 00:28:36.605 ] 00:28:36.605 }' 00:28:36.605 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:36.605 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.171 [2024-11-26 17:25:14.370254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:37.171 [2024-11-26 17:25:14.370322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.171 [2024-11-26 17:25:14.378314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:37.171 [2024-11-26 17:25:14.380713] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:37.171 [2024-11-26 17:25:14.380899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.171 17:25:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:37.171 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.171 17:25:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:37.171 "name": "Existed_Raid", 00:28:37.171 "uuid": "e12c5621-a0a0-4c8a-b68c-171ae676260b", 00:28:37.171 "strip_size_kb": 64, 00:28:37.171 "state": "configuring", 00:28:37.171 "raid_level": "raid0", 00:28:37.171 "superblock": true, 00:28:37.171 "num_base_bdevs": 2, 00:28:37.171 "num_base_bdevs_discovered": 1, 00:28:37.171 "num_base_bdevs_operational": 2, 00:28:37.171 "base_bdevs_list": [ 00:28:37.171 { 00:28:37.171 "name": "BaseBdev1", 00:28:37.171 "uuid": "95be1619-1cf3-4f22-a5b2-f6ea1e4d3b52", 00:28:37.171 "is_configured": true, 00:28:37.171 "data_offset": 2048, 00:28:37.171 "data_size": 63488 00:28:37.171 }, 00:28:37.171 { 00:28:37.171 "name": "BaseBdev2", 00:28:37.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.172 "is_configured": false, 00:28:37.172 "data_offset": 0, 00:28:37.172 "data_size": 0 00:28:37.172 } 00:28:37.172 ] 00:28:37.172 }' 00:28:37.172 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:37.172 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.430 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:37.430 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.430 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.689 [2024-11-26 17:25:14.900439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:37.689 [2024-11-26 17:25:14.900734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:37.689 [2024-11-26 17:25:14.900753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:28:37.689 [2024-11-26 17:25:14.901070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:28:37.689 BaseBdev2 00:28:37.689 [2024-11-26 17:25:14.901264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:37.689 [2024-11-26 17:25:14.901282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:37.689 [2024-11-26 17:25:14.901436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.689 17:25:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.689 [ 00:28:37.689 { 00:28:37.689 "name": "BaseBdev2", 00:28:37.689 "aliases": [ 00:28:37.689 "a78195fb-7721-4951-b8d1-3bb45c190b4b" 00:28:37.689 ], 00:28:37.689 "product_name": "Malloc disk", 00:28:37.689 "block_size": 512, 00:28:37.689 "num_blocks": 65536, 00:28:37.689 "uuid": "a78195fb-7721-4951-b8d1-3bb45c190b4b", 00:28:37.689 "assigned_rate_limits": { 00:28:37.689 "rw_ios_per_sec": 0, 00:28:37.689 "rw_mbytes_per_sec": 0, 00:28:37.689 "r_mbytes_per_sec": 0, 00:28:37.689 "w_mbytes_per_sec": 0 00:28:37.689 }, 00:28:37.689 "claimed": true, 00:28:37.689 "claim_type": "exclusive_write", 00:28:37.689 "zoned": false, 00:28:37.689 "supported_io_types": { 00:28:37.689 "read": true, 00:28:37.689 "write": true, 00:28:37.689 "unmap": true, 00:28:37.689 "flush": true, 00:28:37.689 "reset": true, 00:28:37.689 "nvme_admin": false, 00:28:37.689 "nvme_io": false, 00:28:37.689 "nvme_io_md": false, 00:28:37.689 "write_zeroes": true, 00:28:37.689 "zcopy": true, 00:28:37.689 "get_zone_info": false, 00:28:37.689 "zone_management": false, 00:28:37.689 "zone_append": false, 00:28:37.689 "compare": false, 00:28:37.689 "compare_and_write": false, 00:28:37.689 "abort": true, 00:28:37.689 "seek_hole": false, 00:28:37.689 "seek_data": false, 00:28:37.689 "copy": true, 00:28:37.689 "nvme_iov_md": false 00:28:37.689 }, 00:28:37.689 "memory_domains": [ 00:28:37.689 { 00:28:37.689 "dma_device_id": "system", 00:28:37.689 "dma_device_type": 1 00:28:37.689 }, 00:28:37.689 { 00:28:37.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:37.689 "dma_device_type": 2 00:28:37.689 } 00:28:37.689 ], 00:28:37.689 "driver_specific": {} 00:28:37.689 } 00:28:37.689 ] 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:37.689 17:25:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.689 17:25:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:37.689 "name": "Existed_Raid", 00:28:37.689 "uuid": "e12c5621-a0a0-4c8a-b68c-171ae676260b", 00:28:37.689 "strip_size_kb": 64, 00:28:37.689 "state": "online", 00:28:37.689 "raid_level": "raid0", 00:28:37.689 "superblock": true, 00:28:37.689 "num_base_bdevs": 2, 00:28:37.690 "num_base_bdevs_discovered": 2, 00:28:37.690 "num_base_bdevs_operational": 2, 00:28:37.690 "base_bdevs_list": [ 00:28:37.690 { 00:28:37.690 "name": "BaseBdev1", 00:28:37.690 "uuid": "95be1619-1cf3-4f22-a5b2-f6ea1e4d3b52", 00:28:37.690 "is_configured": true, 00:28:37.690 "data_offset": 2048, 00:28:37.690 "data_size": 63488 00:28:37.690 }, 00:28:37.690 { 00:28:37.690 "name": "BaseBdev2", 00:28:37.690 "uuid": "a78195fb-7721-4951-b8d1-3bb45c190b4b", 00:28:37.690 "is_configured": true, 00:28:37.690 "data_offset": 2048, 00:28:37.690 "data_size": 63488 00:28:37.690 } 00:28:37.690 ] 00:28:37.690 }' 00:28:37.690 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:37.690 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:38.257 [2024-11-26 17:25:15.420980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.257 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.257 "name": "Existed_Raid", 00:28:38.257 "aliases": [ 00:28:38.257 "e12c5621-a0a0-4c8a-b68c-171ae676260b" 00:28:38.257 ], 00:28:38.257 "product_name": "Raid Volume", 00:28:38.257 "block_size": 512, 00:28:38.257 "num_blocks": 126976, 00:28:38.257 "uuid": "e12c5621-a0a0-4c8a-b68c-171ae676260b", 00:28:38.257 "assigned_rate_limits": { 00:28:38.257 "rw_ios_per_sec": 0, 00:28:38.257 "rw_mbytes_per_sec": 0, 00:28:38.257 "r_mbytes_per_sec": 0, 00:28:38.257 "w_mbytes_per_sec": 0 00:28:38.257 }, 00:28:38.257 "claimed": false, 00:28:38.257 "zoned": false, 00:28:38.257 "supported_io_types": { 00:28:38.257 "read": true, 00:28:38.257 "write": true, 00:28:38.257 "unmap": true, 00:28:38.257 "flush": true, 00:28:38.257 "reset": true, 00:28:38.257 "nvme_admin": false, 00:28:38.257 "nvme_io": false, 00:28:38.257 "nvme_io_md": false, 00:28:38.258 "write_zeroes": true, 00:28:38.258 "zcopy": false, 00:28:38.258 "get_zone_info": false, 00:28:38.258 "zone_management": false, 00:28:38.258 "zone_append": false, 00:28:38.258 "compare": false, 00:28:38.258 "compare_and_write": false, 00:28:38.258 "abort": false, 00:28:38.258 "seek_hole": false, 00:28:38.258 "seek_data": false, 00:28:38.258 "copy": false, 00:28:38.258 "nvme_iov_md": false 00:28:38.258 }, 00:28:38.258 "memory_domains": [ 00:28:38.258 { 00:28:38.258 
"dma_device_id": "system", 00:28:38.258 "dma_device_type": 1 00:28:38.258 }, 00:28:38.258 { 00:28:38.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:38.258 "dma_device_type": 2 00:28:38.258 }, 00:28:38.258 { 00:28:38.258 "dma_device_id": "system", 00:28:38.258 "dma_device_type": 1 00:28:38.258 }, 00:28:38.258 { 00:28:38.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:38.258 "dma_device_type": 2 00:28:38.258 } 00:28:38.258 ], 00:28:38.258 "driver_specific": { 00:28:38.258 "raid": { 00:28:38.258 "uuid": "e12c5621-a0a0-4c8a-b68c-171ae676260b", 00:28:38.258 "strip_size_kb": 64, 00:28:38.258 "state": "online", 00:28:38.258 "raid_level": "raid0", 00:28:38.258 "superblock": true, 00:28:38.258 "num_base_bdevs": 2, 00:28:38.258 "num_base_bdevs_discovered": 2, 00:28:38.258 "num_base_bdevs_operational": 2, 00:28:38.258 "base_bdevs_list": [ 00:28:38.258 { 00:28:38.258 "name": "BaseBdev1", 00:28:38.258 "uuid": "95be1619-1cf3-4f22-a5b2-f6ea1e4d3b52", 00:28:38.258 "is_configured": true, 00:28:38.258 "data_offset": 2048, 00:28:38.258 "data_size": 63488 00:28:38.258 }, 00:28:38.258 { 00:28:38.258 "name": "BaseBdev2", 00:28:38.258 "uuid": "a78195fb-7721-4951-b8d1-3bb45c190b4b", 00:28:38.258 "is_configured": true, 00:28:38.258 "data_offset": 2048, 00:28:38.258 "data_size": 63488 00:28:38.258 } 00:28:38.258 ] 00:28:38.258 } 00:28:38.258 } 00:28:38.258 }' 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:38.258 BaseBdev2' 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:38.258 17:25:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.258 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:38.258 [2024-11-26 17:25:15.640788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:38.258 [2024-11-26 17:25:15.640832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:38.258 [2024-11-26 17:25:15.640893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:38.516 "name": "Existed_Raid", 00:28:38.516 "uuid": "e12c5621-a0a0-4c8a-b68c-171ae676260b", 00:28:38.516 "strip_size_kb": 64, 00:28:38.516 "state": "offline", 00:28:38.516 "raid_level": "raid0", 00:28:38.516 "superblock": true, 00:28:38.516 "num_base_bdevs": 2, 00:28:38.516 "num_base_bdevs_discovered": 1, 00:28:38.516 "num_base_bdevs_operational": 1, 00:28:38.516 "base_bdevs_list": [ 00:28:38.516 { 00:28:38.516 "name": null, 00:28:38.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.516 "is_configured": false, 00:28:38.516 "data_offset": 0, 00:28:38.516 "data_size": 63488 00:28:38.516 }, 00:28:38.516 { 00:28:38.516 "name": "BaseBdev2", 00:28:38.516 "uuid": "a78195fb-7721-4951-b8d1-3bb45c190b4b", 00:28:38.516 "is_configured": true, 00:28:38.516 "data_offset": 2048, 00:28:38.516 "data_size": 63488 00:28:38.516 } 00:28:38.516 ] 
00:28:38.516 }' 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:38.516 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:38.775 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:38.775 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:39.073 [2024-11-26 17:25:16.263807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:39.073 [2024-11-26 17:25:16.263872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.073 17:25:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61312 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61312 ']' 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61312 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61312 00:28:39.073 killing process with pid 61312 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61312' 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61312 00:28:39.073 [2024-11-26 17:25:16.475224] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:39.073 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61312 00:28:39.073 [2024-11-26 17:25:16.495332] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:40.457 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:28:40.457 00:28:40.457 real 0m5.564s 00:28:40.457 user 0m7.928s 00:28:40.457 sys 0m0.953s 00:28:40.457 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.457 ************************************ 00:28:40.457 END TEST raid_state_function_test_sb 00:28:40.457 ************************************ 00:28:40.457 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.457 17:25:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:28:40.457 17:25:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:40.457 17:25:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.715 17:25:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:40.715 ************************************ 00:28:40.715 START TEST raid_superblock_test 00:28:40.715 ************************************ 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:28:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61570 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61570 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61570 ']' 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.715 17:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.715 [2024-11-26 17:25:18.037757] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:28:40.715 [2024-11-26 17:25:18.038231] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61570 ] 00:28:40.974 [2024-11-26 17:25:18.249726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.231 [2024-11-26 17:25:18.442117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.489 [2024-11-26 17:25:18.700057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:41.489 [2024-11-26 17:25:18.700129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:41.748 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:28:41.749 
17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.749 malloc1 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.749 [2024-11-26 17:25:19.116945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:41.749 [2024-11-26 17:25:19.117198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:41.749 [2024-11-26 17:25:19.117352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:41.749 [2024-11-26 17:25:19.117470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:41.749 [2024-11-26 17:25:19.120367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:41.749 [2024-11-26 17:25:19.120566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:41.749 pt1 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.749 malloc2 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.749 [2024-11-26 17:25:19.177610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:41.749 [2024-11-26 17:25:19.177697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:41.749 [2024-11-26 17:25:19.177745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:41.749 [2024-11-26 17:25:19.177765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:41.749 [2024-11-26 17:25:19.181292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:41.749 [2024-11-26 17:25:19.181355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:41.749 
pt2 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.749 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.749 [2024-11-26 17:25:19.189767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:41.749 [2024-11-26 17:25:19.192752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:41.749 [2024-11-26 17:25:19.192979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:41.749 [2024-11-26 17:25:19.193002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:28:41.749 [2024-11-26 17:25:19.193430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:41.749 [2024-11-26 17:25:19.193667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:41.749 [2024-11-26 17:25:19.193694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:41.749 [2024-11-26 17:25:19.193975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.006 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:42.006 "name": "raid_bdev1", 00:28:42.006 "uuid": "af617d8a-e978-412b-bace-4a1a2bca85a7", 00:28:42.006 "strip_size_kb": 64, 00:28:42.006 "state": "online", 00:28:42.006 "raid_level": "raid0", 00:28:42.006 "superblock": true, 00:28:42.006 "num_base_bdevs": 2, 00:28:42.006 "num_base_bdevs_discovered": 2, 00:28:42.006 "num_base_bdevs_operational": 2, 00:28:42.006 "base_bdevs_list": [ 00:28:42.006 { 00:28:42.006 "name": "pt1", 
00:28:42.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:42.007 "is_configured": true, 00:28:42.007 "data_offset": 2048, 00:28:42.007 "data_size": 63488 00:28:42.007 }, 00:28:42.007 { 00:28:42.007 "name": "pt2", 00:28:42.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:42.007 "is_configured": true, 00:28:42.007 "data_offset": 2048, 00:28:42.007 "data_size": 63488 00:28:42.007 } 00:28:42.007 ] 00:28:42.007 }' 00:28:42.007 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:42.007 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.265 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.265 [2024-11-26 17:25:19.686373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.524 "name": "raid_bdev1", 00:28:42.524 "aliases": [ 00:28:42.524 "af617d8a-e978-412b-bace-4a1a2bca85a7" 00:28:42.524 ], 00:28:42.524 "product_name": "Raid Volume", 00:28:42.524 "block_size": 512, 00:28:42.524 "num_blocks": 126976, 00:28:42.524 "uuid": "af617d8a-e978-412b-bace-4a1a2bca85a7", 00:28:42.524 "assigned_rate_limits": { 00:28:42.524 "rw_ios_per_sec": 0, 00:28:42.524 "rw_mbytes_per_sec": 0, 00:28:42.524 "r_mbytes_per_sec": 0, 00:28:42.524 "w_mbytes_per_sec": 0 00:28:42.524 }, 00:28:42.524 "claimed": false, 00:28:42.524 "zoned": false, 00:28:42.524 "supported_io_types": { 00:28:42.524 "read": true, 00:28:42.524 "write": true, 00:28:42.524 "unmap": true, 00:28:42.524 "flush": true, 00:28:42.524 "reset": true, 00:28:42.524 "nvme_admin": false, 00:28:42.524 "nvme_io": false, 00:28:42.524 "nvme_io_md": false, 00:28:42.524 "write_zeroes": true, 00:28:42.524 "zcopy": false, 00:28:42.524 "get_zone_info": false, 00:28:42.524 "zone_management": false, 00:28:42.524 "zone_append": false, 00:28:42.524 "compare": false, 00:28:42.524 "compare_and_write": false, 00:28:42.524 "abort": false, 00:28:42.524 "seek_hole": false, 00:28:42.524 "seek_data": false, 00:28:42.524 "copy": false, 00:28:42.524 "nvme_iov_md": false 00:28:42.524 }, 00:28:42.524 "memory_domains": [ 00:28:42.524 { 00:28:42.524 "dma_device_id": "system", 00:28:42.524 "dma_device_type": 1 00:28:42.524 }, 00:28:42.524 { 00:28:42.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:42.524 "dma_device_type": 2 00:28:42.524 }, 00:28:42.524 { 00:28:42.524 "dma_device_id": "system", 00:28:42.524 "dma_device_type": 1 00:28:42.524 }, 00:28:42.524 { 00:28:42.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:42.524 "dma_device_type": 2 00:28:42.524 } 00:28:42.524 ], 00:28:42.524 "driver_specific": { 00:28:42.524 "raid": { 00:28:42.524 "uuid": "af617d8a-e978-412b-bace-4a1a2bca85a7", 00:28:42.524 "strip_size_kb": 64, 00:28:42.524 "state": "online", 00:28:42.524 
"raid_level": "raid0", 00:28:42.524 "superblock": true, 00:28:42.524 "num_base_bdevs": 2, 00:28:42.524 "num_base_bdevs_discovered": 2, 00:28:42.524 "num_base_bdevs_operational": 2, 00:28:42.524 "base_bdevs_list": [ 00:28:42.524 { 00:28:42.524 "name": "pt1", 00:28:42.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:42.524 "is_configured": true, 00:28:42.524 "data_offset": 2048, 00:28:42.524 "data_size": 63488 00:28:42.524 }, 00:28:42.524 { 00:28:42.524 "name": "pt2", 00:28:42.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:42.524 "is_configured": true, 00:28:42.524 "data_offset": 2048, 00:28:42.524 "data_size": 63488 00:28:42.524 } 00:28:42.524 ] 00:28:42.524 } 00:28:42.524 } 00:28:42.524 }' 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:42.524 pt2' 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.524 17:25:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:28:42.524 [2024-11-26 17:25:19.926433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:42.524 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.783 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=af617d8a-e978-412b-bace-4a1a2bca85a7 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
af617d8a-e978-412b-bace-4a1a2bca85a7 ']' 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 [2024-11-26 17:25:19.974094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:42.784 [2024-11-26 17:25:19.974124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:42.784 [2024-11-26 17:25:19.974225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:42.784 [2024-11-26 17:25:19.974279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:42.784 [2024-11-26 17:25:19.974296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 17:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:42.784 17:25:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 [2024-11-26 17:25:20.110203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:42.784 [2024-11-26 17:25:20.112778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:42.784 [2024-11-26 17:25:20.112869] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:42.784 [2024-11-26 17:25:20.112935] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:42.784 [2024-11-26 17:25:20.112958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:42.784 [2024-11-26 17:25:20.112976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:28:42.784 request: 00:28:42.784 { 00:28:42.784 "name": "raid_bdev1", 00:28:42.784 "raid_level": "raid0", 00:28:42.784 "base_bdevs": [ 00:28:42.784 "malloc1", 00:28:42.784 "malloc2" 00:28:42.784 ], 00:28:42.784 "strip_size_kb": 64, 00:28:42.784 
"superblock": false, 00:28:42.784 "method": "bdev_raid_create", 00:28:42.784 "req_id": 1 00:28:42.784 } 00:28:42.784 Got JSON-RPC error response 00:28:42.784 response: 00:28:42.784 { 00:28:42.784 "code": -17, 00:28:42.784 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:42.784 } 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 [2024-11-26 17:25:20.166184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:28:42.784 [2024-11-26 17:25:20.166263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:42.784 [2024-11-26 17:25:20.166286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:42.784 [2024-11-26 17:25:20.166303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:42.784 [2024-11-26 17:25:20.169131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:42.784 [2024-11-26 17:25:20.169177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:42.784 [2024-11-26 17:25:20.169274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:42.784 [2024-11-26 17:25:20.169359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:42.784 pt1 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:42.784 "name": "raid_bdev1", 00:28:42.784 "uuid": "af617d8a-e978-412b-bace-4a1a2bca85a7", 00:28:42.784 "strip_size_kb": 64, 00:28:42.784 "state": "configuring", 00:28:42.784 "raid_level": "raid0", 00:28:42.784 "superblock": true, 00:28:42.784 "num_base_bdevs": 2, 00:28:42.784 "num_base_bdevs_discovered": 1, 00:28:42.784 "num_base_bdevs_operational": 2, 00:28:42.784 "base_bdevs_list": [ 00:28:42.784 { 00:28:42.784 "name": "pt1", 00:28:42.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:42.784 "is_configured": true, 00:28:42.784 "data_offset": 2048, 00:28:42.784 "data_size": 63488 00:28:42.784 }, 00:28:42.784 { 00:28:42.784 "name": null, 00:28:42.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:42.784 "is_configured": false, 00:28:42.784 "data_offset": 2048, 00:28:42.784 "data_size": 63488 00:28:42.784 } 00:28:42.784 ] 00:28:42.784 }' 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:42.784 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.363 [2024-11-26 17:25:20.666340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:43.363 [2024-11-26 17:25:20.666579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:43.363 [2024-11-26 17:25:20.666710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:43.363 [2024-11-26 17:25:20.666807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:43.363 [2024-11-26 17:25:20.667405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:43.363 [2024-11-26 17:25:20.667567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:43.363 [2024-11-26 17:25:20.667764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:43.363 [2024-11-26 17:25:20.667921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:43.363 [2024-11-26 17:25:20.668127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:43.363 [2024-11-26 17:25:20.668240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:28:43.363 [2024-11-26 17:25:20.668593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:43.363 [2024-11-26 17:25:20.668793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:28:43.363 [2024-11-26 17:25:20.668842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:28:43.363 [2024-11-26 17:25:20.669063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:43.363 pt2 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:43.363 "name": "raid_bdev1", 00:28:43.363 "uuid": "af617d8a-e978-412b-bace-4a1a2bca85a7", 00:28:43.363 "strip_size_kb": 64, 00:28:43.363 "state": "online", 00:28:43.363 "raid_level": "raid0", 00:28:43.363 "superblock": true, 00:28:43.363 "num_base_bdevs": 2, 00:28:43.363 "num_base_bdevs_discovered": 2, 00:28:43.363 "num_base_bdevs_operational": 2, 00:28:43.363 "base_bdevs_list": [ 00:28:43.363 { 00:28:43.363 "name": "pt1", 00:28:43.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:43.363 "is_configured": true, 00:28:43.363 "data_offset": 2048, 00:28:43.363 "data_size": 63488 00:28:43.363 }, 00:28:43.363 { 00:28:43.363 "name": "pt2", 00:28:43.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:43.363 "is_configured": true, 00:28:43.363 "data_offset": 2048, 00:28:43.363 "data_size": 63488 00:28:43.363 } 00:28:43.363 ] 00:28:43.363 }' 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:43.363 17:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:43.958 17:25:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:43.958 [2024-11-26 17:25:21.186742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.958 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:43.958 "name": "raid_bdev1", 00:28:43.958 "aliases": [ 00:28:43.958 "af617d8a-e978-412b-bace-4a1a2bca85a7" 00:28:43.958 ], 00:28:43.958 "product_name": "Raid Volume", 00:28:43.958 "block_size": 512, 00:28:43.958 "num_blocks": 126976, 00:28:43.958 "uuid": "af617d8a-e978-412b-bace-4a1a2bca85a7", 00:28:43.958 "assigned_rate_limits": { 00:28:43.958 "rw_ios_per_sec": 0, 00:28:43.958 "rw_mbytes_per_sec": 0, 00:28:43.958 "r_mbytes_per_sec": 0, 00:28:43.958 "w_mbytes_per_sec": 0 00:28:43.958 }, 00:28:43.958 "claimed": false, 00:28:43.958 "zoned": false, 00:28:43.958 "supported_io_types": { 00:28:43.958 "read": true, 00:28:43.959 "write": true, 00:28:43.959 "unmap": true, 00:28:43.959 "flush": true, 00:28:43.959 "reset": true, 00:28:43.959 "nvme_admin": false, 00:28:43.959 "nvme_io": false, 00:28:43.959 "nvme_io_md": false, 00:28:43.959 "write_zeroes": true, 00:28:43.959 "zcopy": false, 00:28:43.959 "get_zone_info": false, 00:28:43.959 "zone_management": false, 00:28:43.959 "zone_append": false, 00:28:43.959 "compare": false, 00:28:43.959 "compare_and_write": false, 00:28:43.959 "abort": false, 00:28:43.959 "seek_hole": false, 00:28:43.959 
"seek_data": false, 00:28:43.959 "copy": false, 00:28:43.959 "nvme_iov_md": false 00:28:43.959 }, 00:28:43.959 "memory_domains": [ 00:28:43.959 { 00:28:43.959 "dma_device_id": "system", 00:28:43.959 "dma_device_type": 1 00:28:43.959 }, 00:28:43.959 { 00:28:43.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:43.959 "dma_device_type": 2 00:28:43.959 }, 00:28:43.959 { 00:28:43.959 "dma_device_id": "system", 00:28:43.959 "dma_device_type": 1 00:28:43.959 }, 00:28:43.959 { 00:28:43.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:43.959 "dma_device_type": 2 00:28:43.959 } 00:28:43.959 ], 00:28:43.959 "driver_specific": { 00:28:43.959 "raid": { 00:28:43.959 "uuid": "af617d8a-e978-412b-bace-4a1a2bca85a7", 00:28:43.959 "strip_size_kb": 64, 00:28:43.959 "state": "online", 00:28:43.959 "raid_level": "raid0", 00:28:43.959 "superblock": true, 00:28:43.959 "num_base_bdevs": 2, 00:28:43.959 "num_base_bdevs_discovered": 2, 00:28:43.959 "num_base_bdevs_operational": 2, 00:28:43.959 "base_bdevs_list": [ 00:28:43.959 { 00:28:43.959 "name": "pt1", 00:28:43.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:43.959 "is_configured": true, 00:28:43.959 "data_offset": 2048, 00:28:43.959 "data_size": 63488 00:28:43.959 }, 00:28:43.959 { 00:28:43.959 "name": "pt2", 00:28:43.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:43.959 "is_configured": true, 00:28:43.959 "data_offset": 2048, 00:28:43.959 "data_size": 63488 00:28:43.959 } 00:28:43.959 ] 00:28:43.959 } 00:28:43.959 } 00:28:43.959 }' 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:43.959 pt2' 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:43.959 17:25:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.959 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.218 [2024-11-26 17:25:21.454813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' af617d8a-e978-412b-bace-4a1a2bca85a7 '!=' af617d8a-e978-412b-bace-4a1a2bca85a7 ']' 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61570 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61570 ']' 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61570 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61570 00:28:44.218 killing process with pid 61570 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 61570' 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61570 00:28:44.218 [2024-11-26 17:25:21.546396] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:44.218 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61570 00:28:44.218 [2024-11-26 17:25:21.546517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:44.218 [2024-11-26 17:25:21.546574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:44.218 [2024-11-26 17:25:21.546591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:28:44.477 [2024-11-26 17:25:21.801616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:45.854 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:28:45.854 00:28:45.854 real 0m5.249s 00:28:45.854 user 0m7.475s 00:28:45.854 sys 0m0.838s 00:28:45.854 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.854 ************************************ 00:28:45.854 END TEST raid_superblock_test 00:28:45.854 ************************************ 00:28:45.854 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.854 17:25:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:28:45.854 17:25:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:45.854 17:25:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.854 17:25:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:45.854 ************************************ 00:28:45.854 START TEST raid_read_error_test 00:28:45.854 ************************************ 00:28:45.854 17:25:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:45.854 17:25:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.r7PGkrhxxA 00:28:45.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61787 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61787 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61787 ']' 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.854 17:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.113 [2024-11-26 17:25:23.357022] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:28:46.113 [2024-11-26 17:25:23.357223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61787 ] 00:28:46.372 [2024-11-26 17:25:23.561679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.372 [2024-11-26 17:25:23.720849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.631 [2024-11-26 17:25:23.974748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:46.631 [2024-11-26 17:25:23.975023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:46.890 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.890 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:28:46.890 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:46.890 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:46.890 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.890 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.149 BaseBdev1_malloc 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.149 true 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.149 [2024-11-26 17:25:24.385296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:47.149 [2024-11-26 17:25:24.385356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.149 [2024-11-26 17:25:24.385381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:47.149 [2024-11-26 17:25:24.385395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.149 [2024-11-26 17:25:24.388257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.149 [2024-11-26 17:25:24.388306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:47.149 BaseBdev1 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.149 BaseBdev2_malloc 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.149 true 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.149 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.149 [2024-11-26 17:25:24.460452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:47.149 [2024-11-26 17:25:24.460687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.149 [2024-11-26 17:25:24.460722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:47.149 [2024-11-26 17:25:24.460740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.150 [2024-11-26 17:25:24.463635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.150 [2024-11-26 17:25:24.463682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:47.150 BaseBdev2 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.150 [2024-11-26 17:25:24.472583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:28:47.150 [2024-11-26 17:25:24.474972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:47.150 [2024-11-26 17:25:24.475354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:47.150 [2024-11-26 17:25:24.475486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:28:47.150 [2024-11-26 17:25:24.475845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:47.150 [2024-11-26 17:25:24.476104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:47.150 [2024-11-26 17:25:24.476161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:28:47.150 [2024-11-26 17:25:24.476546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:47.150 "name": "raid_bdev1", 00:28:47.150 "uuid": "61d9f6c1-96ef-4315-b5b9-30ffd31bde03", 00:28:47.150 "strip_size_kb": 64, 00:28:47.150 "state": "online", 00:28:47.150 "raid_level": "raid0", 00:28:47.150 "superblock": true, 00:28:47.150 "num_base_bdevs": 2, 00:28:47.150 "num_base_bdevs_discovered": 2, 00:28:47.150 "num_base_bdevs_operational": 2, 00:28:47.150 "base_bdevs_list": [ 00:28:47.150 { 00:28:47.150 "name": "BaseBdev1", 00:28:47.150 "uuid": "49e95109-dacc-5cb2-864b-a67ec9465332", 00:28:47.150 "is_configured": true, 00:28:47.150 "data_offset": 2048, 00:28:47.150 "data_size": 63488 00:28:47.150 }, 00:28:47.150 { 00:28:47.150 "name": "BaseBdev2", 00:28:47.150 "uuid": "2a195efd-5288-52d2-811a-151a9e8903fc", 00:28:47.150 "is_configured": true, 00:28:47.150 "data_offset": 2048, 00:28:47.150 "data_size": 63488 00:28:47.150 } 00:28:47.150 ] 00:28:47.150 }' 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:47.150 17:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.717 17:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:47.717 17:25:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:47.717 [2024-11-26 17:25:25.074302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.650 17:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.650 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:48.650 "name": "raid_bdev1", 00:28:48.650 "uuid": "61d9f6c1-96ef-4315-b5b9-30ffd31bde03", 00:28:48.650 "strip_size_kb": 64, 00:28:48.650 "state": "online", 00:28:48.650 "raid_level": "raid0", 00:28:48.650 "superblock": true, 00:28:48.650 "num_base_bdevs": 2, 00:28:48.650 "num_base_bdevs_discovered": 2, 00:28:48.650 "num_base_bdevs_operational": 2, 00:28:48.650 "base_bdevs_list": [ 00:28:48.650 { 00:28:48.650 "name": "BaseBdev1", 00:28:48.650 "uuid": "49e95109-dacc-5cb2-864b-a67ec9465332", 00:28:48.650 "is_configured": true, 00:28:48.650 "data_offset": 2048, 00:28:48.650 "data_size": 63488 00:28:48.650 }, 00:28:48.650 { 00:28:48.650 "name": "BaseBdev2", 00:28:48.650 "uuid": "2a195efd-5288-52d2-811a-151a9e8903fc", 00:28:48.650 "is_configured": true, 00:28:48.650 "data_offset": 2048, 00:28:48.650 "data_size": 63488 00:28:48.650 } 00:28:48.650 ] 00:28:48.650 }' 00:28:48.650 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:48.650 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:49.217 17:25:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:49.217 [2024-11-26 17:25:26.393515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:49.217 [2024-11-26 17:25:26.393558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:49.217 [2024-11-26 17:25:26.396717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:49.217 [2024-11-26 17:25:26.396768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:49.217 [2024-11-26 17:25:26.396805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:49.217 [2024-11-26 17:25:26.396822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:28:49.217 { 00:28:49.217 "results": [ 00:28:49.217 { 00:28:49.217 "job": "raid_bdev1", 00:28:49.217 "core_mask": "0x1", 00:28:49.217 "workload": "randrw", 00:28:49.217 "percentage": 50, 00:28:49.217 "status": "finished", 00:28:49.217 "queue_depth": 1, 00:28:49.217 "io_size": 131072, 00:28:49.217 "runtime": 1.316639, 00:28:49.217 "iops": 13434.965848649477, 00:28:49.217 "mibps": 1679.3707310811847, 00:28:49.217 "io_failed": 1, 00:28:49.217 "io_timeout": 0, 00:28:49.217 "avg_latency_us": 102.35694053675738, 00:28:49.217 "min_latency_us": 29.866666666666667, 00:28:49.217 "max_latency_us": 1677.4095238095238 00:28:49.217 } 00:28:49.217 ], 00:28:49.217 "core_count": 1 00:28:49.217 } 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61787 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61787 ']' 00:28:49.217 17:25:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61787 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61787 00:28:49.217 killing process with pid 61787 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61787' 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61787 00:28:49.217 [2024-11-26 17:25:26.442455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:49.217 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61787 00:28:49.217 [2024-11-26 17:25:26.594655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.r7PGkrhxxA 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:28:50.594 00:28:50.594 real 0m4.768s 00:28:50.594 user 0m5.738s 00:28:50.594 sys 0m0.610s 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.594 ************************************ 00:28:50.594 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:50.594 END TEST raid_read_error_test 00:28:50.594 ************************************ 00:28:50.594 17:25:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:28:50.594 17:25:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:50.594 17:25:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.594 17:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:50.594 ************************************ 00:28:50.594 START TEST raid_write_error_test 00:28:50.594 ************************************ 00:28:50.594 17:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:28:50.594 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:28:50.594 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:28:50.594 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:50.853 17:25:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IQbS7Uen3N 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61933 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61933 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61933 ']' 00:28:50.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:50.853 17:25:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:50.853 [2024-11-26 17:25:28.175454] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:28:50.853 [2024-11-26 17:25:28.175623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61933 ] 00:28:51.110 [2024-11-26 17:25:28.378173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.368 [2024-11-26 17:25:28.565949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.625 [2024-11-26 17:25:28.817623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:51.625 [2024-11-26 17:25:28.817666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:51.882 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.883 BaseBdev1_malloc 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.883 true 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.883 [2024-11-26 17:25:29.190737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:51.883 [2024-11-26 17:25:29.190992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:51.883 [2024-11-26 17:25:29.191039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:51.883 [2024-11-26 17:25:29.191074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:51.883 [2024-11-26 17:25:29.194612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:51.883 [2024-11-26 17:25:29.194824] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:51.883 BaseBdev1 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.883 BaseBdev2_malloc 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.883 true 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.883 [2024-11-26 17:25:29.262729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:51.883 [2024-11-26 17:25:29.262963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:51.883 [2024-11-26 17:25:29.263087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:51.883 
[2024-11-26 17:25:29.263276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:51.883 [2024-11-26 17:25:29.266619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:51.883 [2024-11-26 17:25:29.266809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:28:51.883 BaseBdev2
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:28:51.883 [2024-11-26 17:25:29.271180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:28:51.883 [2024-11-26 17:25:29.273794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:28:51.883 [2024-11-26 17:25:29.274186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:28:51.883 [2024-11-26 17:25:29.274325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:28:51.883 [2024-11-26 17:25:29.274696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:28:51.883 [2024-11-26 17:25:29.275022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:28:51.883 [2024-11-26 17:25:29.275177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:28:51.883 [2024-11-26 17:25:29.275570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:51.883 "name": "raid_bdev1",
00:28:51.883 "uuid": "7d0bf638-a515-4e84-9940-0d5dfe727d14",
00:28:51.883 "strip_size_kb": 64,
00:28:51.883 "state": "online",
00:28:51.883 "raid_level": "raid0",
00:28:51.883 "superblock": true,
00:28:51.883 "num_base_bdevs": 2,
00:28:51.883 "num_base_bdevs_discovered": 2,
00:28:51.883 "num_base_bdevs_operational": 2,
00:28:51.883 "base_bdevs_list": [
00:28:51.883 {
00:28:51.883 "name": "BaseBdev1",
00:28:51.883 "uuid": "29c6f9a5-f03e-54f4-87db-a2b691e6dfa1",
00:28:51.883 "is_configured": true,
00:28:51.883 "data_offset": 2048,
00:28:51.883 "data_size": 63488
00:28:51.883 },
00:28:51.883 {
00:28:51.883 "name": "BaseBdev2",
00:28:51.883 "uuid": "4e938e0d-d7bf-567f-88e6-fef8d3aca890",
00:28:51.883 "is_configured": true,
00:28:51.883 "data_offset": 2048,
00:28:51.883 "data_size": 63488
00:28:51.883 }
00:28:51.883 ]
00:28:51.883 }'
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:51.883 17:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:28:52.450 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:28:52.450 17:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:28:52.450 [2024-11-26 17:25:29.765194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.385 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:53.385 "name": "raid_bdev1",
00:28:53.385 "uuid": "7d0bf638-a515-4e84-9940-0d5dfe727d14",
00:28:53.385 "strip_size_kb": 64,
00:28:53.385 "state": "online",
00:28:53.385 "raid_level": "raid0",
00:28:53.385 "superblock": true,
00:28:53.385 "num_base_bdevs": 2,
00:28:53.385 "num_base_bdevs_discovered": 2,
00:28:53.385 "num_base_bdevs_operational": 2,
00:28:53.385 "base_bdevs_list": [
00:28:53.385 {
00:28:53.385 "name": "BaseBdev1",
00:28:53.385 "uuid": "29c6f9a5-f03e-54f4-87db-a2b691e6dfa1",
00:28:53.385 "is_configured": true,
00:28:53.385 "data_offset": 2048,
00:28:53.385 "data_size": 63488
00:28:53.386 },
00:28:53.386 {
00:28:53.386 "name": "BaseBdev2",
00:28:53.386 "uuid": "4e938e0d-d7bf-567f-88e6-fef8d3aca890",
00:28:53.386 "is_configured": true,
00:28:53.386 "data_offset": 2048,
00:28:53.386 "data_size": 63488
00:28:53.386 }
00:28:53.386 ]
00:28:53.386 }'
00:28:53.386 17:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:53.386 17:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:28:53.952 [2024-11-26 17:25:31.226100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:28:53.952 [2024-11-26 17:25:31.226140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:28:53.952 [2024-11-26 17:25:31.229590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:28:53.952 [2024-11-26 17:25:31.229759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:53.952 [2024-11-26 17:25:31.229838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:28:53.952 [2024-11-26 17:25:31.230105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:28:53.952 {
00:28:53.952 "results": [
00:28:53.952 {
00:28:53.952 "job": "raid_bdev1",
00:28:53.952 "core_mask": "0x1",
00:28:53.952 "workload": "randrw",
00:28:53.952 "percentage": 50,
00:28:53.952 "status": "finished",
00:28:53.952 "queue_depth": 1,
00:28:53.952 "io_size": 131072,
00:28:53.952 "runtime": 1.45861,
00:28:53.952 "iops": 13568.397309767519,
00:28:53.952 "mibps": 1696.0496637209399,
00:28:53.952 "io_failed": 1,
00:28:53.952 "io_timeout": 0,
00:28:53.952 "avg_latency_us": 101.50163606267083,
00:28:53.952 "min_latency_us": 28.64761904761905,
00:28:53.952 "max_latency_us": 1919.2685714285715
00:28:53.952 }
00:28:53.952 ],
00:28:53.952 "core_count": 1
00:28:53.952 }
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61933
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61933 ']'
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61933
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61933
00:28:53.952 killing process with pid 61933 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61933'
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61933
00:28:53.952 [2024-11-26 17:25:31.271107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:28:53.952 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61933
00:28:54.210 [2024-11-26 17:25:31.439238] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:28:55.587 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IQbS7Uen3N
00:28:55.588 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:28:55.588 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:28:55.588 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69
00:28:55.588 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:28:55.588 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:28:55.588 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 ************************************
00:28:55.588 END TEST raid_write_error_test ************************************
00:28:55.588 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]]
00:28:55.588
00:28:55.588 real 0m4.817s
00:28:55.588 user 0m5.769s
00:28:55.588 sys 0m0.620s
00:28:55.588 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:55.588 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:28:55.588 17:25:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:28:55.588 17:25:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:28:55.588 17:25:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:28:55.588 17:25:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:55.588 17:25:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:28:55.588 ************************************
00:28:55.588 START TEST raid_state_function_test ************************************
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62076
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62076'
00:28:55.588 Process raid pid: 62076 17:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62076
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62076 ']'
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:55.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:55.588 17:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:55.846 [2024-11-26 17:25:33.055916] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:28:55.846 [2024-11-26 17:25:33.056379] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:55.846 [2024-11-26 17:25:33.257608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:56.105 [2024-11-26 17:25:33.393836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:56.363 [2024-11-26 17:25:33.644949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:56.363 [2024-11-26 17:25:33.644998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:56.930 [2024-11-26 17:25:34.073043] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:28:56.930 [2024-11-26 17:25:34.073143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:28:56.930 [2024-11-26 17:25:34.073165] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:28:56.930 [2024-11-26 17:25:34.073207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:56.930 "name": "Existed_Raid",
00:28:56.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:56.930 "strip_size_kb": 64,
00:28:56.930 "state": "configuring",
00:28:56.930 "raid_level": "concat",
00:28:56.930 "superblock": false,
00:28:56.930 "num_base_bdevs": 2,
00:28:56.930 "num_base_bdevs_discovered": 0,
00:28:56.930 "num_base_bdevs_operational": 2,
00:28:56.930 "base_bdevs_list": [
00:28:56.930 {
00:28:56.930 "name": "BaseBdev1",
00:28:56.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:56.930 "is_configured": false,
00:28:56.930 "data_offset": 0,
00:28:56.930 "data_size": 0
00:28:56.930 },
00:28:56.930 {
00:28:56.930 "name": "BaseBdev2",
00:28:56.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:56.930 "is_configured": false,
00:28:56.930 "data_offset": 0,
00:28:56.930 "data_size": 0
00:28:56.930 }
00:28:56.930 ]
00:28:56.930 }'
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:56.930 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.218 [2024-11-26 17:25:34.520970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:28:57.218 [2024-11-26 17:25:34.521008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.218 [2024-11-26 17:25:34.528978] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:28:57.218 [2024-11-26 17:25:34.529034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:28:57.218 [2024-11-26 17:25:34.529059] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:28:57.218 [2024-11-26 17:25:34.529078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.218 [2024-11-26 17:25:34.575490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:28:57.218 BaseBdev1
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.218 [
00:28:57.218 {
00:28:57.218 "name": "BaseBdev1",
00:28:57.218 "aliases": [
00:28:57.218 "c23c756e-48f2-4f55-8305-945336033671"
00:28:57.218 ],
00:28:57.218 "product_name": "Malloc disk",
00:28:57.218 "block_size": 512,
00:28:57.218 "num_blocks": 65536,
00:28:57.218 "uuid": "c23c756e-48f2-4f55-8305-945336033671",
00:28:57.218 "assigned_rate_limits": {
00:28:57.218 "rw_ios_per_sec": 0,
00:28:57.218 "rw_mbytes_per_sec": 0,
00:28:57.218 "r_mbytes_per_sec": 0,
00:28:57.218 "w_mbytes_per_sec": 0
00:28:57.218 },
00:28:57.218 "claimed": true,
00:28:57.218 "claim_type": "exclusive_write",
00:28:57.218 "zoned": false,
00:28:57.218 "supported_io_types": {
00:28:57.218 "read": true,
00:28:57.218 "write": true,
00:28:57.218 "unmap": true,
00:28:57.218 "flush": true,
00:28:57.218 "reset": true,
00:28:57.218 "nvme_admin": false,
00:28:57.218 "nvme_io": false,
00:28:57.218 "nvme_io_md": false,
00:28:57.218 "write_zeroes": true,
00:28:57.218 "zcopy": true,
00:28:57.218 "get_zone_info": false,
00:28:57.218 "zone_management": false,
00:28:57.218 "zone_append": false,
00:28:57.218 "compare": false,
00:28:57.218 "compare_and_write": false,
00:28:57.218 "abort": true,
00:28:57.218 "seek_hole": false,
00:28:57.218 "seek_data": false,
00:28:57.218 "copy": true,
00:28:57.218 "nvme_iov_md": false
00:28:57.218 },
00:28:57.218 "memory_domains": [
00:28:57.218 {
00:28:57.218 "dma_device_id": "system",
00:28:57.218 "dma_device_type": 1
00:28:57.218 },
00:28:57.218 {
00:28:57.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:28:57.218 "dma_device_type": 2
00:28:57.218 }
00:28:57.218 ],
00:28:57.218 "driver_specific": {}
00:28:57.218 }
00:28:57.218 ]
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.218 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:28:57.522 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.522 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:57.522 "name": "Existed_Raid",
00:28:57.522 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:57.522 "strip_size_kb": 64,
00:28:57.522 "state": "configuring",
00:28:57.522 "raid_level": "concat",
00:28:57.522 "superblock": false,
00:28:57.522 "num_base_bdevs": 2,
00:28:57.522 "num_base_bdevs_discovered": 1,
00:28:57.522 "num_base_bdevs_operational": 2,
00:28:57.522 "base_bdevs_list": [
00:28:57.522 {
00:28:57.522 "name": "BaseBdev1",
00:28:57.522 "uuid": "c23c756e-48f2-4f55-8305-945336033671",
00:28:57.522 "is_configured": true,
00:28:57.522 "data_offset": 0,
00:28:57.522 "data_size": 65536
00:28:57.522 },
00:28:57.522 {
00:28:57.522 "name": "BaseBdev2",
00:28:57.522 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:57.522 "is_configured": false,
00:28:57.522 "data_offset": 0,
00:28:57.522 "data_size": 0
00:28:57.522 }
00:28:57.522 ]
00:28:57.522 }'
00:28:57.522 17:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:57.522 17:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.781 [2024-11-26 17:25:35.091681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:28:57.781 [2024-11-26 17:25:35.091742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.781 [2024-11-26 17:25:35.099722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:28:57.781 [2024-11-26 17:25:35.101955] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:28:57.781 [2024-11-26 17:25:35.102006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:28:57.781 "name": "Existed_Raid",
00:28:57.781 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:57.781 "strip_size_kb": 64,
00:28:57.781 "state": "configuring",
00:28:57.781 "raid_level": "concat",
00:28:57.781 "superblock": false,
00:28:57.781 "num_base_bdevs": 2,
00:28:57.781 "num_base_bdevs_discovered": 1,
00:28:57.781 "num_base_bdevs_operational": 2,
00:28:57.781 "base_bdevs_list": [
00:28:57.781 {
00:28:57.781 "name": "BaseBdev1",
00:28:57.781 "uuid": "c23c756e-48f2-4f55-8305-945336033671",
00:28:57.781 "is_configured": true,
00:28:57.781 "data_offset": 0,
00:28:57.781 "data_size": 65536
00:28:57.781 },
00:28:57.781 {
00:28:57.781 "name": "BaseBdev2",
00:28:57.781 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:57.781 "is_configured": false,
00:28:57.781 "data_offset": 0,
00:28:57.781 "data_size": 0
00:28:57.781 }
00:28:57.781 ]
00:28:57.781 }'
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:28:57.781 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:28:58.349 [2024-11-26 17:25:35.576195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:28:58.349 [2024-11-26 17:25:35.576259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:28:58.349 [2024-11-26 17:25:35.576271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:28:58.349 [2024-11-26 17:25:35.576598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:28:58.349 [2024-11-26 17:25:35.576791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:28:58.349 [2024-11-26 17:25:35.576807] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:28:58.349 [2024-11-26 17:25:35.577107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:58.349 BaseBdev2
17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.349 [ 00:28:58.349 { 00:28:58.349 "name": "BaseBdev2", 00:28:58.349 "aliases": [ 00:28:58.349 "837cf782-8c42-4e76-b35e-d05dba149620" 00:28:58.349 ], 00:28:58.349 "product_name": "Malloc disk", 00:28:58.349 "block_size": 512, 00:28:58.349 "num_blocks": 65536, 00:28:58.349 "uuid": "837cf782-8c42-4e76-b35e-d05dba149620", 00:28:58.349 "assigned_rate_limits": { 00:28:58.349 "rw_ios_per_sec": 0, 00:28:58.349 "rw_mbytes_per_sec": 0, 
00:28:58.349 "r_mbytes_per_sec": 0, 00:28:58.349 "w_mbytes_per_sec": 0 00:28:58.349 }, 00:28:58.349 "claimed": true, 00:28:58.349 "claim_type": "exclusive_write", 00:28:58.349 "zoned": false, 00:28:58.349 "supported_io_types": { 00:28:58.349 "read": true, 00:28:58.349 "write": true, 00:28:58.349 "unmap": true, 00:28:58.349 "flush": true, 00:28:58.349 "reset": true, 00:28:58.349 "nvme_admin": false, 00:28:58.349 "nvme_io": false, 00:28:58.349 "nvme_io_md": false, 00:28:58.349 "write_zeroes": true, 00:28:58.349 "zcopy": true, 00:28:58.349 "get_zone_info": false, 00:28:58.349 "zone_management": false, 00:28:58.349 "zone_append": false, 00:28:58.349 "compare": false, 00:28:58.349 "compare_and_write": false, 00:28:58.349 "abort": true, 00:28:58.349 "seek_hole": false, 00:28:58.349 "seek_data": false, 00:28:58.349 "copy": true, 00:28:58.349 "nvme_iov_md": false 00:28:58.349 }, 00:28:58.349 "memory_domains": [ 00:28:58.349 { 00:28:58.349 "dma_device_id": "system", 00:28:58.349 "dma_device_type": 1 00:28:58.349 }, 00:28:58.349 { 00:28:58.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.349 "dma_device_type": 2 00:28:58.349 } 00:28:58.349 ], 00:28:58.349 "driver_specific": {} 00:28:58.349 } 00:28:58.349 ] 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:58.349 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.350 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.350 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:58.350 "name": "Existed_Raid", 00:28:58.350 "uuid": "97c1ef04-4b5f-44f7-9778-392249965135", 00:28:58.350 "strip_size_kb": 64, 00:28:58.350 "state": "online", 00:28:58.350 "raid_level": "concat", 00:28:58.350 "superblock": false, 00:28:58.350 "num_base_bdevs": 2, 00:28:58.350 "num_base_bdevs_discovered": 2, 00:28:58.350 "num_base_bdevs_operational": 2, 00:28:58.350 "base_bdevs_list": [ 00:28:58.350 { 00:28:58.350 "name": "BaseBdev1", 00:28:58.350 "uuid": "c23c756e-48f2-4f55-8305-945336033671", 00:28:58.350 
"is_configured": true, 00:28:58.350 "data_offset": 0, 00:28:58.350 "data_size": 65536 00:28:58.350 }, 00:28:58.350 { 00:28:58.350 "name": "BaseBdev2", 00:28:58.350 "uuid": "837cf782-8c42-4e76-b35e-d05dba149620", 00:28:58.350 "is_configured": true, 00:28:58.350 "data_offset": 0, 00:28:58.350 "data_size": 65536 00:28:58.350 } 00:28:58.350 ] 00:28:58.350 }' 00:28:58.350 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:58.350 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.607 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:58.607 [2024-11-26 17:25:36.052683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:58.865 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.865 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:28:58.865 "name": "Existed_Raid", 00:28:58.865 "aliases": [ 00:28:58.865 "97c1ef04-4b5f-44f7-9778-392249965135" 00:28:58.865 ], 00:28:58.866 "product_name": "Raid Volume", 00:28:58.866 "block_size": 512, 00:28:58.866 "num_blocks": 131072, 00:28:58.866 "uuid": "97c1ef04-4b5f-44f7-9778-392249965135", 00:28:58.866 "assigned_rate_limits": { 00:28:58.866 "rw_ios_per_sec": 0, 00:28:58.866 "rw_mbytes_per_sec": 0, 00:28:58.866 "r_mbytes_per_sec": 0, 00:28:58.866 "w_mbytes_per_sec": 0 00:28:58.866 }, 00:28:58.866 "claimed": false, 00:28:58.866 "zoned": false, 00:28:58.866 "supported_io_types": { 00:28:58.866 "read": true, 00:28:58.866 "write": true, 00:28:58.866 "unmap": true, 00:28:58.866 "flush": true, 00:28:58.866 "reset": true, 00:28:58.866 "nvme_admin": false, 00:28:58.866 "nvme_io": false, 00:28:58.866 "nvme_io_md": false, 00:28:58.866 "write_zeroes": true, 00:28:58.866 "zcopy": false, 00:28:58.866 "get_zone_info": false, 00:28:58.866 "zone_management": false, 00:28:58.866 "zone_append": false, 00:28:58.866 "compare": false, 00:28:58.866 "compare_and_write": false, 00:28:58.866 "abort": false, 00:28:58.866 "seek_hole": false, 00:28:58.866 "seek_data": false, 00:28:58.866 "copy": false, 00:28:58.866 "nvme_iov_md": false 00:28:58.866 }, 00:28:58.866 "memory_domains": [ 00:28:58.866 { 00:28:58.866 "dma_device_id": "system", 00:28:58.866 "dma_device_type": 1 00:28:58.866 }, 00:28:58.866 { 00:28:58.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.866 "dma_device_type": 2 00:28:58.866 }, 00:28:58.866 { 00:28:58.866 "dma_device_id": "system", 00:28:58.866 "dma_device_type": 1 00:28:58.866 }, 00:28:58.866 { 00:28:58.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.866 "dma_device_type": 2 00:28:58.866 } 00:28:58.866 ], 00:28:58.866 "driver_specific": { 00:28:58.866 "raid": { 00:28:58.866 "uuid": "97c1ef04-4b5f-44f7-9778-392249965135", 00:28:58.866 "strip_size_kb": 64, 00:28:58.866 "state": "online", 00:28:58.866 "raid_level": "concat", 
00:28:58.866 "superblock": false, 00:28:58.866 "num_base_bdevs": 2, 00:28:58.866 "num_base_bdevs_discovered": 2, 00:28:58.866 "num_base_bdevs_operational": 2, 00:28:58.866 "base_bdevs_list": [ 00:28:58.866 { 00:28:58.866 "name": "BaseBdev1", 00:28:58.866 "uuid": "c23c756e-48f2-4f55-8305-945336033671", 00:28:58.866 "is_configured": true, 00:28:58.866 "data_offset": 0, 00:28:58.866 "data_size": 65536 00:28:58.866 }, 00:28:58.866 { 00:28:58.866 "name": "BaseBdev2", 00:28:58.866 "uuid": "837cf782-8c42-4e76-b35e-d05dba149620", 00:28:58.866 "is_configured": true, 00:28:58.866 "data_offset": 0, 00:28:58.866 "data_size": 65536 00:28:58.866 } 00:28:58.866 ] 00:28:58.866 } 00:28:58.866 } 00:28:58.866 }' 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:58.866 BaseBdev2' 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.866 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.866 [2024-11-26 17:25:36.284467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:58.866 [2024-11-26 17:25:36.284510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:58.866 [2024-11-26 17:25:36.284564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.124 17:25:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:59.124 "name": "Existed_Raid", 00:28:59.124 "uuid": "97c1ef04-4b5f-44f7-9778-392249965135", 00:28:59.124 "strip_size_kb": 64, 00:28:59.124 "state": "offline", 00:28:59.124 "raid_level": "concat", 00:28:59.124 "superblock": false, 00:28:59.124 "num_base_bdevs": 2, 00:28:59.124 "num_base_bdevs_discovered": 1, 00:28:59.124 "num_base_bdevs_operational": 1, 00:28:59.124 "base_bdevs_list": [ 00:28:59.124 { 00:28:59.124 "name": null, 00:28:59.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.124 "is_configured": false, 00:28:59.124 "data_offset": 0, 00:28:59.124 "data_size": 65536 00:28:59.124 }, 00:28:59.124 { 00:28:59.124 "name": "BaseBdev2", 00:28:59.124 "uuid": "837cf782-8c42-4e76-b35e-d05dba149620", 00:28:59.124 "is_configured": true, 00:28:59.124 "data_offset": 0, 00:28:59.124 "data_size": 65536 00:28:59.124 } 00:28:59.124 ] 00:28:59.124 }' 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:59.124 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.383 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:59.383 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:59.383 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:59.383 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.383 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.383 17:25:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.642 [2024-11-26 17:25:36.871306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:59.642 [2024-11-26 17:25:36.871372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:59.642 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62076 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62076 ']' 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62076 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62076 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:59.642 killing process with pid 62076 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62076' 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62076 00:28:59.642 [2024-11-26 17:25:37.071009] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:59.642 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62076 00:28:59.900 [2024-11-26 17:25:37.091434] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:29:01.335 00:29:01.335 real 0m5.427s 00:29:01.335 user 0m7.793s 00:29:01.335 sys 0m0.921s 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.335 ************************************ 00:29:01.335 END TEST raid_state_function_test 00:29:01.335 ************************************ 00:29:01.335 17:25:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:29:01.335 17:25:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:01.335 17:25:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.335 17:25:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:01.335 ************************************ 00:29:01.335 START TEST raid_state_function_test_sb 00:29:01.335 ************************************ 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:01.335 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62335 00:29:01.336 Process raid pid: 62335 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62335' 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62335 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 62335 ']' 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.336 17:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.336 [2024-11-26 17:25:38.508800] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:29:01.336 [2024-11-26 17:25:38.508940] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.336 [2024-11-26 17:25:38.684626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.594 [2024-11-26 17:25:38.810137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.853 [2024-11-26 17:25:39.041327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:01.853 [2024-11-26 17:25:39.041379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.111 [2024-11-26 17:25:39.533337] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:02.111 [2024-11-26 17:25:39.533397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:02.111 [2024-11-26 17:25:39.533409] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:02.111 [2024-11-26 17:25:39.533422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.111 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.112 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:02.112 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.370 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.370 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:02.370 "name": "Existed_Raid", 00:29:02.370 "uuid": "23ab8aab-1518-4a6b-a366-7673635ab5fa", 00:29:02.370 
"strip_size_kb": 64, 00:29:02.370 "state": "configuring", 00:29:02.370 "raid_level": "concat", 00:29:02.370 "superblock": true, 00:29:02.370 "num_base_bdevs": 2, 00:29:02.370 "num_base_bdevs_discovered": 0, 00:29:02.370 "num_base_bdevs_operational": 2, 00:29:02.370 "base_bdevs_list": [ 00:29:02.370 { 00:29:02.370 "name": "BaseBdev1", 00:29:02.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.370 "is_configured": false, 00:29:02.370 "data_offset": 0, 00:29:02.370 "data_size": 0 00:29:02.370 }, 00:29:02.370 { 00:29:02.370 "name": "BaseBdev2", 00:29:02.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.370 "is_configured": false, 00:29:02.370 "data_offset": 0, 00:29:02.370 "data_size": 0 00:29:02.370 } 00:29:02.370 ] 00:29:02.370 }' 00:29:02.370 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:02.370 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.629 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:02.629 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.629 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.629 [2024-11-26 17:25:39.993421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:02.629 [2024-11-26 17:25:39.993468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:02.629 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.629 17:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:02.629 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:02.629 17:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.629 [2024-11-26 17:25:40.001469] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:02.629 [2024-11-26 17:25:40.001531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:02.629 [2024-11-26 17:25:40.001551] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:02.629 [2024-11-26 17:25:40.001575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.629 [2024-11-26 17:25:40.053578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:02.629 BaseBdev1 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.629 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.629 [ 00:29:02.629 { 00:29:02.629 "name": "BaseBdev1", 00:29:02.888 "aliases": [ 00:29:02.888 "ac557a36-b8dd-434b-8918-aa9877c6fca5" 00:29:02.888 ], 00:29:02.888 "product_name": "Malloc disk", 00:29:02.888 "block_size": 512, 00:29:02.888 "num_blocks": 65536, 00:29:02.888 "uuid": "ac557a36-b8dd-434b-8918-aa9877c6fca5", 00:29:02.888 "assigned_rate_limits": { 00:29:02.888 "rw_ios_per_sec": 0, 00:29:02.888 "rw_mbytes_per_sec": 0, 00:29:02.888 "r_mbytes_per_sec": 0, 00:29:02.888 "w_mbytes_per_sec": 0 00:29:02.888 }, 00:29:02.888 "claimed": true, 00:29:02.888 "claim_type": "exclusive_write", 00:29:02.888 "zoned": false, 00:29:02.888 "supported_io_types": { 00:29:02.888 "read": true, 00:29:02.888 "write": true, 00:29:02.888 "unmap": true, 00:29:02.888 "flush": true, 00:29:02.888 "reset": true, 00:29:02.888 "nvme_admin": false, 00:29:02.888 "nvme_io": false, 00:29:02.888 "nvme_io_md": false, 00:29:02.888 "write_zeroes": true, 00:29:02.888 "zcopy": true, 00:29:02.888 "get_zone_info": false, 00:29:02.888 "zone_management": false, 00:29:02.888 "zone_append": false, 00:29:02.888 "compare": false, 00:29:02.888 
"compare_and_write": false, 00:29:02.888 "abort": true, 00:29:02.888 "seek_hole": false, 00:29:02.888 "seek_data": false, 00:29:02.888 "copy": true, 00:29:02.888 "nvme_iov_md": false 00:29:02.888 }, 00:29:02.888 "memory_domains": [ 00:29:02.888 { 00:29:02.888 "dma_device_id": "system", 00:29:02.888 "dma_device_type": 1 00:29:02.888 }, 00:29:02.888 { 00:29:02.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:02.888 "dma_device_type": 2 00:29:02.888 } 00:29:02.888 ], 00:29:02.888 "driver_specific": {} 00:29:02.888 } 00:29:02.888 ] 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:02.888 17:25:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:02.888 "name": "Existed_Raid", 00:29:02.888 "uuid": "03d50a2c-0b74-4720-823e-08e33ef4a17e", 00:29:02.888 "strip_size_kb": 64, 00:29:02.888 "state": "configuring", 00:29:02.888 "raid_level": "concat", 00:29:02.888 "superblock": true, 00:29:02.888 "num_base_bdevs": 2, 00:29:02.888 "num_base_bdevs_discovered": 1, 00:29:02.888 "num_base_bdevs_operational": 2, 00:29:02.888 "base_bdevs_list": [ 00:29:02.888 { 00:29:02.888 "name": "BaseBdev1", 00:29:02.888 "uuid": "ac557a36-b8dd-434b-8918-aa9877c6fca5", 00:29:02.888 "is_configured": true, 00:29:02.888 "data_offset": 2048, 00:29:02.888 "data_size": 63488 00:29:02.888 }, 00:29:02.888 { 00:29:02.888 "name": "BaseBdev2", 00:29:02.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.888 "is_configured": false, 00:29:02.888 "data_offset": 0, 00:29:02.888 "data_size": 0 00:29:02.888 } 00:29:02.888 ] 00:29:02.888 }' 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:02.888 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.146 [2024-11-26 17:25:40.565767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:03.146 [2024-11-26 17:25:40.565828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.146 [2024-11-26 17:25:40.573828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:03.146 [2024-11-26 17:25:40.576046] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:03.146 [2024-11-26 17:25:40.576109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:29:03.146 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.147 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.405 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.405 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:03.405 "name": "Existed_Raid", 00:29:03.405 "uuid": "ee5dca95-d657-4a85-bc54-d09ae3a72b25", 00:29:03.405 "strip_size_kb": 64, 00:29:03.405 "state": "configuring", 00:29:03.405 "raid_level": "concat", 00:29:03.405 "superblock": true, 00:29:03.405 "num_base_bdevs": 2, 00:29:03.405 "num_base_bdevs_discovered": 1, 00:29:03.405 "num_base_bdevs_operational": 2, 00:29:03.405 "base_bdevs_list": [ 00:29:03.405 { 00:29:03.405 "name": "BaseBdev1", 00:29:03.405 "uuid": 
"ac557a36-b8dd-434b-8918-aa9877c6fca5", 00:29:03.405 "is_configured": true, 00:29:03.405 "data_offset": 2048, 00:29:03.405 "data_size": 63488 00:29:03.405 }, 00:29:03.405 { 00:29:03.405 "name": "BaseBdev2", 00:29:03.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.405 "is_configured": false, 00:29:03.405 "data_offset": 0, 00:29:03.405 "data_size": 0 00:29:03.405 } 00:29:03.405 ] 00:29:03.405 }' 00:29:03.405 17:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:03.405 17:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 [2024-11-26 17:25:41.083754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:03.663 [2024-11-26 17:25:41.084055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:03.663 [2024-11-26 17:25:41.084093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:03.663 [2024-11-26 17:25:41.084416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:03.663 BaseBdev2 00:29:03.663 [2024-11-26 17:25:41.084588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:03.663 [2024-11-26 17:25:41.084608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:03.663 [2024-11-26 17:25:41.084760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.663 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 [ 00:29:03.663 { 00:29:03.663 "name": "BaseBdev2", 00:29:03.663 "aliases": [ 00:29:03.663 "8bc52a04-df5c-456d-9401-434c20260c5b" 00:29:03.663 ], 00:29:03.663 "product_name": "Malloc disk", 00:29:03.663 "block_size": 512, 00:29:03.663 "num_blocks": 65536, 00:29:03.663 "uuid": "8bc52a04-df5c-456d-9401-434c20260c5b", 00:29:03.663 "assigned_rate_limits": { 00:29:03.663 "rw_ios_per_sec": 0, 00:29:03.663 "rw_mbytes_per_sec": 0, 00:29:03.663 "r_mbytes_per_sec": 0, 
00:29:03.921 "w_mbytes_per_sec": 0 00:29:03.921 }, 00:29:03.921 "claimed": true, 00:29:03.921 "claim_type": "exclusive_write", 00:29:03.921 "zoned": false, 00:29:03.921 "supported_io_types": { 00:29:03.921 "read": true, 00:29:03.921 "write": true, 00:29:03.921 "unmap": true, 00:29:03.921 "flush": true, 00:29:03.921 "reset": true, 00:29:03.921 "nvme_admin": false, 00:29:03.921 "nvme_io": false, 00:29:03.921 "nvme_io_md": false, 00:29:03.921 "write_zeroes": true, 00:29:03.921 "zcopy": true, 00:29:03.921 "get_zone_info": false, 00:29:03.921 "zone_management": false, 00:29:03.921 "zone_append": false, 00:29:03.921 "compare": false, 00:29:03.921 "compare_and_write": false, 00:29:03.921 "abort": true, 00:29:03.921 "seek_hole": false, 00:29:03.921 "seek_data": false, 00:29:03.921 "copy": true, 00:29:03.921 "nvme_iov_md": false 00:29:03.921 }, 00:29:03.921 "memory_domains": [ 00:29:03.921 { 00:29:03.921 "dma_device_id": "system", 00:29:03.921 "dma_device_type": 1 00:29:03.921 }, 00:29:03.921 { 00:29:03.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:03.921 "dma_device_type": 2 00:29:03.921 } 00:29:03.921 ], 00:29:03.921 "driver_specific": {} 00:29:03.921 } 00:29:03.921 ] 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:03.921 "name": "Existed_Raid", 00:29:03.921 "uuid": "ee5dca95-d657-4a85-bc54-d09ae3a72b25", 00:29:03.921 "strip_size_kb": 64, 00:29:03.921 "state": "online", 00:29:03.921 "raid_level": "concat", 00:29:03.921 "superblock": true, 00:29:03.921 "num_base_bdevs": 2, 00:29:03.921 "num_base_bdevs_discovered": 2, 00:29:03.921 "num_base_bdevs_operational": 2, 00:29:03.921 "base_bdevs_list": [ 00:29:03.921 { 00:29:03.921 "name": "BaseBdev1", 00:29:03.921 "uuid": 
"ac557a36-b8dd-434b-8918-aa9877c6fca5", 00:29:03.921 "is_configured": true, 00:29:03.921 "data_offset": 2048, 00:29:03.921 "data_size": 63488 00:29:03.921 }, 00:29:03.921 { 00:29:03.921 "name": "BaseBdev2", 00:29:03.921 "uuid": "8bc52a04-df5c-456d-9401-434c20260c5b", 00:29:03.921 "is_configured": true, 00:29:03.921 "data_offset": 2048, 00:29:03.921 "data_size": 63488 00:29:03.921 } 00:29:03.921 ] 00:29:03.921 }' 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:03.921 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:04.189 [2024-11-26 17:25:41.556332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:04.189 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:04.189 "name": "Existed_Raid", 00:29:04.189 "aliases": [ 00:29:04.189 "ee5dca95-d657-4a85-bc54-d09ae3a72b25" 00:29:04.189 ], 00:29:04.189 "product_name": "Raid Volume", 00:29:04.189 "block_size": 512, 00:29:04.189 "num_blocks": 126976, 00:29:04.189 "uuid": "ee5dca95-d657-4a85-bc54-d09ae3a72b25", 00:29:04.189 "assigned_rate_limits": { 00:29:04.190 "rw_ios_per_sec": 0, 00:29:04.190 "rw_mbytes_per_sec": 0, 00:29:04.190 "r_mbytes_per_sec": 0, 00:29:04.190 "w_mbytes_per_sec": 0 00:29:04.190 }, 00:29:04.190 "claimed": false, 00:29:04.190 "zoned": false, 00:29:04.190 "supported_io_types": { 00:29:04.190 "read": true, 00:29:04.190 "write": true, 00:29:04.190 "unmap": true, 00:29:04.190 "flush": true, 00:29:04.190 "reset": true, 00:29:04.190 "nvme_admin": false, 00:29:04.190 "nvme_io": false, 00:29:04.190 "nvme_io_md": false, 00:29:04.190 "write_zeroes": true, 00:29:04.190 "zcopy": false, 00:29:04.190 "get_zone_info": false, 00:29:04.190 "zone_management": false, 00:29:04.190 "zone_append": false, 00:29:04.190 "compare": false, 00:29:04.190 "compare_and_write": false, 00:29:04.190 "abort": false, 00:29:04.190 "seek_hole": false, 00:29:04.190 "seek_data": false, 00:29:04.190 "copy": false, 00:29:04.190 "nvme_iov_md": false 00:29:04.190 }, 00:29:04.190 "memory_domains": [ 00:29:04.190 { 00:29:04.190 "dma_device_id": "system", 00:29:04.190 "dma_device_type": 1 00:29:04.190 }, 00:29:04.190 { 00:29:04.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:04.190 "dma_device_type": 2 00:29:04.190 }, 00:29:04.190 { 00:29:04.190 "dma_device_id": "system", 00:29:04.190 "dma_device_type": 1 00:29:04.190 }, 00:29:04.190 { 00:29:04.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:04.190 "dma_device_type": 2 00:29:04.190 } 00:29:04.190 ], 00:29:04.190 "driver_specific": { 00:29:04.190 "raid": { 00:29:04.190 "uuid": "ee5dca95-d657-4a85-bc54-d09ae3a72b25", 00:29:04.190 
"strip_size_kb": 64, 00:29:04.190 "state": "online", 00:29:04.190 "raid_level": "concat", 00:29:04.190 "superblock": true, 00:29:04.190 "num_base_bdevs": 2, 00:29:04.190 "num_base_bdevs_discovered": 2, 00:29:04.190 "num_base_bdevs_operational": 2, 00:29:04.190 "base_bdevs_list": [ 00:29:04.190 { 00:29:04.190 "name": "BaseBdev1", 00:29:04.190 "uuid": "ac557a36-b8dd-434b-8918-aa9877c6fca5", 00:29:04.190 "is_configured": true, 00:29:04.190 "data_offset": 2048, 00:29:04.190 "data_size": 63488 00:29:04.190 }, 00:29:04.190 { 00:29:04.190 "name": "BaseBdev2", 00:29:04.190 "uuid": "8bc52a04-df5c-456d-9401-434c20260c5b", 00:29:04.190 "is_configured": true, 00:29:04.190 "data_offset": 2048, 00:29:04.190 "data_size": 63488 00:29:04.190 } 00:29:04.190 ] 00:29:04.190 } 00:29:04.190 } 00:29:04.190 }' 00:29:04.190 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:04.451 BaseBdev2' 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.451 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.451 [2024-11-26 17:25:41.800126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:04.451 [2024-11-26 17:25:41.800170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:04.452 [2024-11-26 17:25:41.800232] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:04.711 "name": "Existed_Raid", 00:29:04.711 "uuid": "ee5dca95-d657-4a85-bc54-d09ae3a72b25", 00:29:04.711 "strip_size_kb": 64, 00:29:04.711 "state": "offline", 00:29:04.711 "raid_level": "concat", 00:29:04.711 "superblock": true, 00:29:04.711 "num_base_bdevs": 2, 00:29:04.711 "num_base_bdevs_discovered": 1, 00:29:04.711 "num_base_bdevs_operational": 1, 00:29:04.711 "base_bdevs_list": [ 00:29:04.711 { 00:29:04.711 "name": null, 00:29:04.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.711 "is_configured": false, 00:29:04.711 "data_offset": 0, 00:29:04.711 "data_size": 63488 00:29:04.711 }, 00:29:04.711 { 00:29:04.711 "name": "BaseBdev2", 00:29:04.711 "uuid": "8bc52a04-df5c-456d-9401-434c20260c5b", 00:29:04.711 "is_configured": true, 00:29:04.711 "data_offset": 2048, 00:29:04.711 "data_size": 63488 00:29:04.711 } 00:29:04.711 ] 00:29:04.711 }' 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:04.711 17:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.969 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.969 [2024-11-26 17:25:42.396279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:04.969 [2024-11-26 17:25:42.396365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.227 17:25:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62335 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62335 ']' 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62335 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62335 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.227 killing process with pid 62335 00:29:05.227 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62335' 00:29:05.228 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62335 00:29:05.228 17:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62335 00:29:05.228 [2024-11-26 17:25:42.578515] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:05.228 [2024-11-26 17:25:42.598099] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:06.599 17:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:29:06.599 00:29:06.599 real 0m5.498s 00:29:06.599 user 0m7.980s 00:29:06.599 sys 0m0.830s 00:29:06.599 17:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.599 17:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.599 ************************************ 00:29:06.600 END TEST raid_state_function_test_sb 00:29:06.600 ************************************ 00:29:06.600 17:25:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:29:06.600 17:25:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:06.600 17:25:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.600 17:25:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:06.600 ************************************ 00:29:06.600 START TEST raid_superblock_test 00:29:06.600 ************************************ 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:06.600 
17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62587 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62587 00:29:06.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62587 ']' 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.600 17:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:06.855 [2024-11-26 17:25:44.051587] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:06.855 [2024-11-26 17:25:44.051744] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62587 ] 00:29:06.855 [2024-11-26 17:25:44.227450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.111 [2024-11-26 17:25:44.429396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.368 [2024-11-26 17:25:44.748547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:07.368 [2024-11-26 17:25:44.748599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:07.932 17:25:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.932 malloc1 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.932 [2024-11-26 17:25:45.237531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:07.932 [2024-11-26 17:25:45.237808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:07.932 [2024-11-26 17:25:45.237984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:07.932 [2024-11-26 17:25:45.238156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:07.932 [2024-11-26 17:25:45.241738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:07.932 [2024-11-26 17:25:45.241943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:07.932 pt1 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:07.932 17:25:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.932 malloc2 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.932 [2024-11-26 17:25:45.296761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:07.932 [2024-11-26 17:25:45.296847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:07.932 [2024-11-26 17:25:45.296898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:07.932 
[2024-11-26 17:25:45.296914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:07.932 [2024-11-26 17:25:45.300041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:07.932 [2024-11-26 17:25:45.300109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:07.932 pt2 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.932 [2024-11-26 17:25:45.304849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:07.932 [2024-11-26 17:25:45.307495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:07.932 [2024-11-26 17:25:45.307876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:07.932 [2024-11-26 17:25:45.307905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:07.932 [2024-11-26 17:25:45.308288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:07.932 [2024-11-26 17:25:45.308514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:07.932 [2024-11-26 17:25:45.308535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:07.932 [2024-11-26 17:25:45.308803] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:07.932 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:07.933 "name": "raid_bdev1", 00:29:07.933 "uuid": 
"026f68fe-50ad-4c4e-a535-435b1fa468be", 00:29:07.933 "strip_size_kb": 64, 00:29:07.933 "state": "online", 00:29:07.933 "raid_level": "concat", 00:29:07.933 "superblock": true, 00:29:07.933 "num_base_bdevs": 2, 00:29:07.933 "num_base_bdevs_discovered": 2, 00:29:07.933 "num_base_bdevs_operational": 2, 00:29:07.933 "base_bdevs_list": [ 00:29:07.933 { 00:29:07.933 "name": "pt1", 00:29:07.933 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:07.933 "is_configured": true, 00:29:07.933 "data_offset": 2048, 00:29:07.933 "data_size": 63488 00:29:07.933 }, 00:29:07.933 { 00:29:07.933 "name": "pt2", 00:29:07.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:07.933 "is_configured": true, 00:29:07.933 "data_offset": 2048, 00:29:07.933 "data_size": 63488 00:29:07.933 } 00:29:07.933 ] 00:29:07.933 }' 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:07.933 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.559 
17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:08.559 [2024-11-26 17:25:45.729377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:08.559 "name": "raid_bdev1", 00:29:08.559 "aliases": [ 00:29:08.559 "026f68fe-50ad-4c4e-a535-435b1fa468be" 00:29:08.559 ], 00:29:08.559 "product_name": "Raid Volume", 00:29:08.559 "block_size": 512, 00:29:08.559 "num_blocks": 126976, 00:29:08.559 "uuid": "026f68fe-50ad-4c4e-a535-435b1fa468be", 00:29:08.559 "assigned_rate_limits": { 00:29:08.559 "rw_ios_per_sec": 0, 00:29:08.559 "rw_mbytes_per_sec": 0, 00:29:08.559 "r_mbytes_per_sec": 0, 00:29:08.559 "w_mbytes_per_sec": 0 00:29:08.559 }, 00:29:08.559 "claimed": false, 00:29:08.559 "zoned": false, 00:29:08.559 "supported_io_types": { 00:29:08.559 "read": true, 00:29:08.559 "write": true, 00:29:08.559 "unmap": true, 00:29:08.559 "flush": true, 00:29:08.559 "reset": true, 00:29:08.559 "nvme_admin": false, 00:29:08.559 "nvme_io": false, 00:29:08.559 "nvme_io_md": false, 00:29:08.559 "write_zeroes": true, 00:29:08.559 "zcopy": false, 00:29:08.559 "get_zone_info": false, 00:29:08.559 "zone_management": false, 00:29:08.559 "zone_append": false, 00:29:08.559 "compare": false, 00:29:08.559 "compare_and_write": false, 00:29:08.559 "abort": false, 00:29:08.559 "seek_hole": false, 00:29:08.559 "seek_data": false, 00:29:08.559 "copy": false, 00:29:08.559 "nvme_iov_md": false 00:29:08.559 }, 00:29:08.559 "memory_domains": [ 00:29:08.559 { 00:29:08.559 "dma_device_id": "system", 00:29:08.559 "dma_device_type": 1 00:29:08.559 }, 00:29:08.559 { 00:29:08.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.559 "dma_device_type": 2 00:29:08.559 }, 00:29:08.559 { 00:29:08.559 "dma_device_id": "system", 00:29:08.559 
"dma_device_type": 1 00:29:08.559 }, 00:29:08.559 { 00:29:08.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.559 "dma_device_type": 2 00:29:08.559 } 00:29:08.559 ], 00:29:08.559 "driver_specific": { 00:29:08.559 "raid": { 00:29:08.559 "uuid": "026f68fe-50ad-4c4e-a535-435b1fa468be", 00:29:08.559 "strip_size_kb": 64, 00:29:08.559 "state": "online", 00:29:08.559 "raid_level": "concat", 00:29:08.559 "superblock": true, 00:29:08.559 "num_base_bdevs": 2, 00:29:08.559 "num_base_bdevs_discovered": 2, 00:29:08.559 "num_base_bdevs_operational": 2, 00:29:08.559 "base_bdevs_list": [ 00:29:08.559 { 00:29:08.559 "name": "pt1", 00:29:08.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:08.559 "is_configured": true, 00:29:08.559 "data_offset": 2048, 00:29:08.559 "data_size": 63488 00:29:08.559 }, 00:29:08.559 { 00:29:08.559 "name": "pt2", 00:29:08.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:08.559 "is_configured": true, 00:29:08.559 "data_offset": 2048, 00:29:08.559 "data_size": 63488 00:29:08.559 } 00:29:08.559 ] 00:29:08.559 } 00:29:08.559 } 00:29:08.559 }' 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:08.559 pt2' 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:08.559 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:08.560 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:08.560 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.560 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.560 [2024-11-26 17:25:45.957396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:29:08.560 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.560 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=026f68fe-50ad-4c4e-a535-435b1fa468be 00:29:08.560 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 026f68fe-50ad-4c4e-a535-435b1fa468be ']' 00:29:08.560 17:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:08.560 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.560 17:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.560 [2024-11-26 17:25:46.001031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:08.560 [2024-11-26 17:25:46.001074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:08.560 [2024-11-26 17:25:46.001169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:08.560 [2024-11-26 17:25:46.001221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:08.560 [2024-11-26 17:25:46.001238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.816 
17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:08.816 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.817 [2024-11-26 17:25:46.129150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:08.817 [2024-11-26 17:25:46.131461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:08.817 [2024-11-26 17:25:46.131678] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:08.817 [2024-11-26 17:25:46.131758] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:08.817 [2024-11-26 17:25:46.131779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:08.817 [2024-11-26 17:25:46.131794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:29:08.817 request: 00:29:08.817 { 00:29:08.817 "name": "raid_bdev1", 00:29:08.817 "raid_level": "concat", 00:29:08.817 "base_bdevs": [ 00:29:08.817 "malloc1", 00:29:08.817 "malloc2" 00:29:08.817 ], 00:29:08.817 "strip_size_kb": 64, 00:29:08.817 "superblock": false, 00:29:08.817 "method": "bdev_raid_create", 00:29:08.817 "req_id": 1 00:29:08.817 } 00:29:08.817 Got JSON-RPC error response 00:29:08.817 response: 00:29:08.817 { 00:29:08.817 "code": -17, 00:29:08.817 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:08.817 } 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.817 [2024-11-26 17:25:46.193167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:08.817 [2024-11-26 17:25:46.193379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:08.817 [2024-11-26 17:25:46.193411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:08.817 [2024-11-26 17:25:46.193428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:08.817 [2024-11-26 17:25:46.196341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:08.817 [2024-11-26 17:25:46.196496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:08.817 [2024-11-26 17:25:46.196613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:08.817 [2024-11-26 17:25:46.196685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:08.817 pt1 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.817 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:08.817 "name": "raid_bdev1", 00:29:08.817 "uuid": "026f68fe-50ad-4c4e-a535-435b1fa468be", 00:29:08.817 "strip_size_kb": 64, 00:29:08.817 "state": "configuring", 00:29:08.817 "raid_level": "concat", 00:29:08.817 "superblock": true, 00:29:08.817 "num_base_bdevs": 2, 00:29:08.817 "num_base_bdevs_discovered": 1, 00:29:08.817 "num_base_bdevs_operational": 2, 00:29:08.817 "base_bdevs_list": [ 00:29:08.817 { 00:29:08.817 "name": "pt1", 00:29:08.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:08.817 "is_configured": true, 00:29:08.817 "data_offset": 2048, 00:29:08.818 "data_size": 63488 00:29:08.818 }, 00:29:08.818 { 00:29:08.818 "name": null, 00:29:08.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:08.818 "is_configured": false, 00:29:08.818 "data_offset": 2048, 00:29:08.818 "data_size": 63488 00:29:08.818 } 00:29:08.818 ] 00:29:08.818 }' 00:29:08.818 17:25:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:08.818 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.380 [2024-11-26 17:25:46.637251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:09.380 [2024-11-26 17:25:46.637341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:09.380 [2024-11-26 17:25:46.637369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:09.380 [2024-11-26 17:25:46.637386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:09.380 [2024-11-26 17:25:46.637910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:09.380 [2024-11-26 17:25:46.637937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:09.380 [2024-11-26 17:25:46.638045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:09.380 [2024-11-26 17:25:46.638105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:09.380 [2024-11-26 17:25:46.638250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:09.380 [2024-11-26 17:25:46.638266] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:09.380 [2024-11-26 17:25:46.638556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:09.380 [2024-11-26 17:25:46.638717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:09.380 [2024-11-26 17:25:46.638727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:09.380 [2024-11-26 17:25:46.638881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:09.380 pt2 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:09.380 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:09.381 "name": "raid_bdev1", 00:29:09.381 "uuid": "026f68fe-50ad-4c4e-a535-435b1fa468be", 00:29:09.381 "strip_size_kb": 64, 00:29:09.381 "state": "online", 00:29:09.381 "raid_level": "concat", 00:29:09.381 "superblock": true, 00:29:09.381 "num_base_bdevs": 2, 00:29:09.381 "num_base_bdevs_discovered": 2, 00:29:09.381 "num_base_bdevs_operational": 2, 00:29:09.381 "base_bdevs_list": [ 00:29:09.381 { 00:29:09.381 "name": "pt1", 00:29:09.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:09.381 "is_configured": true, 00:29:09.381 "data_offset": 2048, 00:29:09.381 "data_size": 63488 00:29:09.381 }, 00:29:09.381 { 00:29:09.381 "name": "pt2", 00:29:09.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:09.381 "is_configured": true, 00:29:09.381 "data_offset": 2048, 00:29:09.381 "data_size": 63488 00:29:09.381 } 00:29:09.381 ] 00:29:09.381 }' 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:09.381 17:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.638 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:09.638 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:09.638 
17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:09.638 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:09.638 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:09.638 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:09.638 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:09.638 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:09.638 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.638 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.638 [2024-11-26 17:25:47.081600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:09.896 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.896 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:09.896 "name": "raid_bdev1", 00:29:09.896 "aliases": [ 00:29:09.896 "026f68fe-50ad-4c4e-a535-435b1fa468be" 00:29:09.896 ], 00:29:09.896 "product_name": "Raid Volume", 00:29:09.896 "block_size": 512, 00:29:09.896 "num_blocks": 126976, 00:29:09.896 "uuid": "026f68fe-50ad-4c4e-a535-435b1fa468be", 00:29:09.896 "assigned_rate_limits": { 00:29:09.896 "rw_ios_per_sec": 0, 00:29:09.896 "rw_mbytes_per_sec": 0, 00:29:09.896 "r_mbytes_per_sec": 0, 00:29:09.896 "w_mbytes_per_sec": 0 00:29:09.896 }, 00:29:09.896 "claimed": false, 00:29:09.896 "zoned": false, 00:29:09.896 "supported_io_types": { 00:29:09.896 "read": true, 00:29:09.896 "write": true, 00:29:09.896 "unmap": true, 00:29:09.896 "flush": true, 00:29:09.896 "reset": true, 00:29:09.896 "nvme_admin": false, 00:29:09.896 "nvme_io": false, 00:29:09.896 "nvme_io_md": false, 00:29:09.896 
"write_zeroes": true, 00:29:09.896 "zcopy": false, 00:29:09.896 "get_zone_info": false, 00:29:09.896 "zone_management": false, 00:29:09.896 "zone_append": false, 00:29:09.896 "compare": false, 00:29:09.896 "compare_and_write": false, 00:29:09.896 "abort": false, 00:29:09.896 "seek_hole": false, 00:29:09.896 "seek_data": false, 00:29:09.896 "copy": false, 00:29:09.896 "nvme_iov_md": false 00:29:09.896 }, 00:29:09.896 "memory_domains": [ 00:29:09.896 { 00:29:09.896 "dma_device_id": "system", 00:29:09.896 "dma_device_type": 1 00:29:09.896 }, 00:29:09.896 { 00:29:09.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.896 "dma_device_type": 2 00:29:09.896 }, 00:29:09.896 { 00:29:09.896 "dma_device_id": "system", 00:29:09.896 "dma_device_type": 1 00:29:09.896 }, 00:29:09.896 { 00:29:09.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.896 "dma_device_type": 2 00:29:09.896 } 00:29:09.896 ], 00:29:09.896 "driver_specific": { 00:29:09.896 "raid": { 00:29:09.896 "uuid": "026f68fe-50ad-4c4e-a535-435b1fa468be", 00:29:09.896 "strip_size_kb": 64, 00:29:09.896 "state": "online", 00:29:09.896 "raid_level": "concat", 00:29:09.896 "superblock": true, 00:29:09.896 "num_base_bdevs": 2, 00:29:09.896 "num_base_bdevs_discovered": 2, 00:29:09.896 "num_base_bdevs_operational": 2, 00:29:09.896 "base_bdevs_list": [ 00:29:09.896 { 00:29:09.896 "name": "pt1", 00:29:09.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:09.896 "is_configured": true, 00:29:09.896 "data_offset": 2048, 00:29:09.896 "data_size": 63488 00:29:09.896 }, 00:29:09.896 { 00:29:09.896 "name": "pt2", 00:29:09.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:09.896 "is_configured": true, 00:29:09.896 "data_offset": 2048, 00:29:09.896 "data_size": 63488 00:29:09.896 } 00:29:09.896 ] 00:29:09.896 } 00:29:09.896 } 00:29:09.896 }' 00:29:09.896 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:29:09.896 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:09.896 pt2' 00:29:09.896 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.897 17:25:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:09.897 [2024-11-26 17:25:47.309690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:09.897 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 026f68fe-50ad-4c4e-a535-435b1fa468be '!=' 026f68fe-50ad-4c4e-a535-435b1fa468be ']' 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62587 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62587 ']' 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62587 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62587 00:29:10.155 killing process with pid 62587 
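The comparison just above (`cmp_base_bdev='512 '` checked against `[[ 512 == \5\1\2\ \ \ ]]`) comes from the test's jq expression `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`. As an illustrative reconstruction (not part of the log), the sketch below mimics that jq behavior in Python; the field values are assumptions based on the `bdev_get_bdevs` output shown earlier in the log, where only `block_size: 512` is present and the metadata fields are absent.

```python
# Illustrative sketch of the test's jq join(" ") comparison.
# jq converts numbers to strings and treats null/missing values as "",
# so a bdev with only block_size=512 yields "512   " (three trailing spaces).

def bdev_signature(bdev: dict) -> str:
    """Mimic jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'."""
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

# Assumed inputs modeled on the log's bdev_get_bdevs dumps:
raid_bdev = {"block_size": 512}   # md_size / md_interleave / dif_type not set
base_bdev = {"block_size": 512}

cmp_raid_bdev = bdev_signature(raid_bdev)
cmp_base_bdev = bdev_signature(base_bdev)
assert cmp_raid_bdev == cmp_base_bdev  # both "512" plus three trailing spaces
```

This is why the bash pattern in the log escapes three trailing spaces: the three missing fields each contribute an empty string joined by a space.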
00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62587' 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62587 00:29:10.155 [2024-11-26 17:25:47.391816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:10.155 17:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62587 00:29:10.155 [2024-11-26 17:25:47.391917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:10.155 [2024-11-26 17:25:47.391974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:10.155 [2024-11-26 17:25:47.391991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:10.414 [2024-11-26 17:25:47.637368] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:11.789 17:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:29:11.789 00:29:11.789 real 0m4.957s 00:29:11.789 user 0m7.018s 00:29:11.789 sys 0m0.795s 00:29:11.789 17:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.789 ************************************ 00:29:11.789 END TEST raid_superblock_test 00:29:11.789 ************************************ 00:29:11.789 17:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.789 17:25:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:29:11.789 17:25:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:11.789 17:25:48 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.789 17:25:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:11.789 ************************************ 00:29:11.789 START TEST raid_read_error_test 00:29:11.789 ************************************ 00:29:11.789 17:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:29:11.789 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:29:11.789 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:29:11.789 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:29:11.789 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:11.789 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:11.789 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:11.789 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:11.789 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:29:11.790 17:25:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zWxs6dx4JN 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62804 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62804 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62804 ']' 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
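The `waitforlisten 62804` call above blocks until the freshly started bdevperf process is accepting RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. As a hedged, self-contained sketch (not the actual SPDK helper), the polling pattern looks roughly like this; the `is_listening` predicate is a stand-in for the real UNIX-domain-socket check:

```python
# Illustrative sketch of a waitforlisten-style retry loop.
# The real helper probes the RPC socket; here the predicate is stubbed
# so the example runs anywhere.
import time

def wait_for_listen(is_listening, max_retries=100, delay=0.0):
    """Poll until the predicate reports ready; return attempts used."""
    for attempt in range(1, max_retries + 1):
        if is_listening():
            return attempt
        time.sleep(delay)
    raise TimeoutError("process did not start listening in time")

# Stub predicate: the "server" becomes ready on the third poll.
state = {"polls": 0}
def fake_listening():
    state["polls"] += 1
    return state["polls"] >= 3

attempts = wait_for_listen(fake_listening)
assert attempts == 3
```

In the log, the same loop explains the `local max_retries=100` and the "Waiting for process to start up and listen on UNIX domain socket" echo that precede the bdevperf startup banner.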
00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.790 17:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:11.790 [2024-11-26 17:25:49.085314] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:11.790 [2024-11-26 17:25:49.085458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62804 ] 00:29:12.048 [2024-11-26 17:25:49.271833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.048 [2024-11-26 17:25:49.434894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.338 [2024-11-26 17:25:49.691675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:12.338 [2024-11-26 17:25:49.691741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.905 BaseBdev1_malloc 00:29:12.905 17:25:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.905 true 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.905 [2024-11-26 17:25:50.194238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:12.905 [2024-11-26 17:25:50.194300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:12.905 [2024-11-26 17:25:50.194323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:12.905 [2024-11-26 17:25:50.194338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:12.905 [2024-11-26 17:25:50.196858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:12.905 [2024-11-26 17:25:50.196906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:12.905 BaseBdev1 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.905 BaseBdev2_malloc 00:29:12.905 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.906 true 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.906 [2024-11-26 17:25:50.260965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:12.906 [2024-11-26 17:25:50.261031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:12.906 [2024-11-26 17:25:50.261062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:12.906 [2024-11-26 17:25:50.261077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:12.906 [2024-11-26 17:25:50.263600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:12.906 [2024-11-26 17:25:50.263769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:12.906 BaseBdev2 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.906 [2024-11-26 17:25:50.269029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:12.906 [2024-11-26 17:25:50.271280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:12.906 [2024-11-26 17:25:50.271523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:12.906 [2024-11-26 17:25:50.271577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:12.906 [2024-11-26 17:25:50.271960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:12.906 [2024-11-26 17:25:50.272259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:12.906 [2024-11-26 17:25:50.272364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:12.906 [2024-11-26 17:25:50.272650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:12.906 "name": "raid_bdev1", 00:29:12.906 "uuid": "cd022b64-2460-4625-9772-f93b6c9300e3", 00:29:12.906 "strip_size_kb": 64, 00:29:12.906 "state": "online", 00:29:12.906 "raid_level": "concat", 00:29:12.906 "superblock": true, 00:29:12.906 "num_base_bdevs": 2, 00:29:12.906 "num_base_bdevs_discovered": 2, 00:29:12.906 "num_base_bdevs_operational": 2, 00:29:12.906 "base_bdevs_list": [ 00:29:12.906 { 00:29:12.906 "name": "BaseBdev1", 00:29:12.906 "uuid": "fa8b1c96-d0a3-5cc7-b8d7-5944dea6b7c8", 00:29:12.906 "is_configured": true, 00:29:12.906 "data_offset": 2048, 00:29:12.906 "data_size": 63488 00:29:12.906 }, 00:29:12.906 { 00:29:12.906 "name": "BaseBdev2", 00:29:12.906 
"uuid": "d0e53cfa-3d41-5bab-b2ca-43786955876d", 00:29:12.906 "is_configured": true, 00:29:12.906 "data_offset": 2048, 00:29:12.906 "data_size": 63488 00:29:12.906 } 00:29:12.906 ] 00:29:12.906 }' 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:12.906 17:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.473 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:29:13.473 17:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:13.473 [2024-11-26 17:25:50.866549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:14.410 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:14.411 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:14.411 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.411 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.411 17:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.411 17:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.411 17:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.411 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:14.411 "name": "raid_bdev1", 00:29:14.411 "uuid": "cd022b64-2460-4625-9772-f93b6c9300e3", 00:29:14.411 "strip_size_kb": 64, 00:29:14.411 "state": "online", 00:29:14.411 "raid_level": "concat", 00:29:14.411 "superblock": true, 00:29:14.411 "num_base_bdevs": 2, 00:29:14.411 "num_base_bdevs_discovered": 2, 00:29:14.411 "num_base_bdevs_operational": 2, 00:29:14.411 "base_bdevs_list": [ 00:29:14.411 { 00:29:14.411 "name": "BaseBdev1", 00:29:14.411 "uuid": "fa8b1c96-d0a3-5cc7-b8d7-5944dea6b7c8", 00:29:14.411 "is_configured": true, 00:29:14.411 "data_offset": 2048, 00:29:14.411 "data_size": 63488 00:29:14.411 }, 00:29:14.411 { 00:29:14.411 "name": "BaseBdev2", 00:29:14.411 "uuid": 
"d0e53cfa-3d41-5bab-b2ca-43786955876d", 00:29:14.411 "is_configured": true, 00:29:14.411 "data_offset": 2048, 00:29:14.411 "data_size": 63488 00:29:14.411 } 00:29:14.411 ] 00:29:14.411 }' 00:29:14.411 17:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:14.411 17:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.979 [2024-11-26 17:25:52.181183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:14.979 [2024-11-26 17:25:52.181226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:14.979 [2024-11-26 17:25:52.184415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:14.979 [2024-11-26 17:25:52.184612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:14.979 [2024-11-26 17:25:52.184689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:14.979 [2024-11-26 17:25:52.184908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:14.979 { 00:29:14.979 "results": [ 00:29:14.979 { 00:29:14.979 "job": "raid_bdev1", 00:29:14.979 "core_mask": "0x1", 00:29:14.979 "workload": "randrw", 00:29:14.979 "percentage": 50, 00:29:14.979 "status": "finished", 00:29:14.979 "queue_depth": 1, 00:29:14.979 "io_size": 131072, 00:29:14.979 "runtime": 1.312426, 00:29:14.979 "iops": 14292.615355075257, 00:29:14.979 "mibps": 1786.5769193844071, 00:29:14.979 "io_failed": 1, 00:29:14.979 "io_timeout": 0, 00:29:14.979 "avg_latency_us": 
96.44096654558193, 00:29:14.979 "min_latency_us": 27.67238095238095, 00:29:14.979 "max_latency_us": 1638.4 00:29:14.979 } 00:29:14.979 ], 00:29:14.979 "core_count": 1 00:29:14.979 } 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62804 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62804 ']' 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62804 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62804 00:29:14.979 killing process with pid 62804 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62804' 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62804 00:29:14.979 [2024-11-26 17:25:52.222349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:14.979 17:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62804 00:29:14.979 [2024-11-26 17:25:52.365042] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:16.373 17:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:29:16.373 17:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zWxs6dx4JN 00:29:16.373 17:25:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:16.373 17:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:29:16.373 17:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:29:16.373 17:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:16.373 17:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:16.373 17:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:29:16.373 00:29:16.373 real 0m4.669s 00:29:16.373 user 0m5.714s 00:29:16.373 sys 0m0.597s 00:29:16.373 17:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.373 ************************************ 00:29:16.373 END TEST raid_read_error_test 00:29:16.373 ************************************ 00:29:16.373 17:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.373 17:25:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:29:16.373 17:25:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:16.373 17:25:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.373 17:25:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:16.373 ************************************ 00:29:16.373 START TEST raid_write_error_test 00:29:16.373 ************************************ 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:16.373 17:25:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Q8ljr6XDJ6 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62950 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62950 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62950 ']' 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.373 17:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.631 [2024-11-26 17:25:53.845735] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:29:16.631 [2024-11-26 17:25:53.845917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62950 ] 00:29:16.631 [2024-11-26 17:25:54.038403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.890 [2024-11-26 17:25:54.159805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.149 [2024-11-26 17:25:54.369368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:17.149 [2024-11-26 17:25:54.369436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:17.407 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.407 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:29:17.407 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:17.407 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:17.407 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.407 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.407 BaseBdev1_malloc 00:29:17.407 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.408 true 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.408 [2024-11-26 17:25:54.820788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:17.408 [2024-11-26 17:25:54.820871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.408 [2024-11-26 17:25:54.820897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:17.408 [2024-11-26 17:25:54.820913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.408 [2024-11-26 17:25:54.823497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.408 [2024-11-26 17:25:54.823544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:17.408 BaseBdev1 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.408 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.666 BaseBdev2_malloc 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:17.667 17:25:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.667 true 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.667 [2024-11-26 17:25:54.882570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:17.667 [2024-11-26 17:25:54.882632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.667 [2024-11-26 17:25:54.882653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:17.667 [2024-11-26 17:25:54.882666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.667 [2024-11-26 17:25:54.885042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.667 [2024-11-26 17:25:54.885099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:17.667 BaseBdev2 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.667 [2024-11-26 17:25:54.890653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:29:17.667 [2024-11-26 17:25:54.892873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:17.667 [2024-11-26 17:25:54.893098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:17.667 [2024-11-26 17:25:54.893134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:17.667 [2024-11-26 17:25:54.893403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:17.667 [2024-11-26 17:25:54.893597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:17.667 [2024-11-26 17:25:54.893620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:17.667 [2024-11-26 17:25:54.893772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:17.667 17:25:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:17.667 "name": "raid_bdev1", 00:29:17.667 "uuid": "8cab9d5c-b619-45d0-ac25-60392264181e", 00:29:17.667 "strip_size_kb": 64, 00:29:17.667 "state": "online", 00:29:17.667 "raid_level": "concat", 00:29:17.667 "superblock": true, 00:29:17.667 "num_base_bdevs": 2, 00:29:17.667 "num_base_bdevs_discovered": 2, 00:29:17.667 "num_base_bdevs_operational": 2, 00:29:17.667 "base_bdevs_list": [ 00:29:17.667 { 00:29:17.667 "name": "BaseBdev1", 00:29:17.667 "uuid": "69e81149-b1ca-5673-802d-48cdd6448264", 00:29:17.667 "is_configured": true, 00:29:17.667 "data_offset": 2048, 00:29:17.667 "data_size": 63488 00:29:17.667 }, 00:29:17.667 { 00:29:17.667 "name": "BaseBdev2", 00:29:17.667 "uuid": "6fe1e5bb-5a54-5087-8499-8e07c6834a6a", 00:29:17.667 "is_configured": true, 00:29:17.667 "data_offset": 2048, 00:29:17.667 "data_size": 63488 00:29:17.667 } 00:29:17.667 ] 00:29:17.667 }' 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:17.667 17:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.925 17:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:29:17.925 17:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:18.184 [2024-11-26 17:25:55.444178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:29:19.121 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:29:19.121 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.121 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.121 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.121 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:19.121 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:29:19.121 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:19.122 "name": "raid_bdev1", 00:29:19.122 "uuid": "8cab9d5c-b619-45d0-ac25-60392264181e", 00:29:19.122 "strip_size_kb": 64, 00:29:19.122 "state": "online", 00:29:19.122 "raid_level": "concat", 00:29:19.122 "superblock": true, 00:29:19.122 "num_base_bdevs": 2, 00:29:19.122 "num_base_bdevs_discovered": 2, 00:29:19.122 "num_base_bdevs_operational": 2, 00:29:19.122 "base_bdevs_list": [ 00:29:19.122 { 00:29:19.122 "name": "BaseBdev1", 00:29:19.122 "uuid": "69e81149-b1ca-5673-802d-48cdd6448264", 00:29:19.122 "is_configured": true, 00:29:19.122 "data_offset": 2048, 00:29:19.122 "data_size": 63488 00:29:19.122 }, 00:29:19.122 { 00:29:19.122 "name": "BaseBdev2", 00:29:19.122 "uuid": "6fe1e5bb-5a54-5087-8499-8e07c6834a6a", 00:29:19.122 "is_configured": true, 00:29:19.122 "data_offset": 2048, 00:29:19.122 "data_size": 63488 00:29:19.122 } 00:29:19.122 ] 00:29:19.122 }' 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:19.122 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.444 17:25:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:19.444 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.444 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.444 [2024-11-26 17:25:56.835923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:19.444 [2024-11-26 17:25:56.835970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:19.444 [2024-11-26 17:25:56.839127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:19.444 [2024-11-26 17:25:56.839182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:19.444 [2024-11-26 17:25:56.839220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:19.444 [2024-11-26 17:25:56.839236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:19.444 { 00:29:19.444 "results": [ 00:29:19.444 { 00:29:19.444 "job": "raid_bdev1", 00:29:19.444 "core_mask": "0x1", 00:29:19.444 "workload": "randrw", 00:29:19.444 "percentage": 50, 00:29:19.444 "status": "finished", 00:29:19.444 "queue_depth": 1, 00:29:19.444 "io_size": 131072, 00:29:19.444 "runtime": 1.3895, 00:29:19.444 "iops": 14574.307304785894, 00:29:19.444 "mibps": 1821.7884130982368, 00:29:19.444 "io_failed": 1, 00:29:19.444 "io_timeout": 0, 00:29:19.444 "avg_latency_us": 94.40596822888745, 00:29:19.444 "min_latency_us": 27.794285714285714, 00:29:19.444 "max_latency_us": 1575.9847619047619 00:29:19.444 } 00:29:19.444 ], 00:29:19.444 "core_count": 1 00:29:19.444 } 00:29:19.444 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.444 17:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62950 00:29:19.444 17:25:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62950 ']' 00:29:19.444 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62950 00:29:19.444 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:29:19.444 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.444 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62950 00:29:19.703 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.703 killing process with pid 62950 00:29:19.703 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.703 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62950' 00:29:19.703 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62950 00:29:19.703 17:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62950 00:29:19.703 [2024-11-26 17:25:56.881598] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:19.703 [2024-11-26 17:25:57.036381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:21.080 17:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:29:21.080 17:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:21.080 17:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Q8ljr6XDJ6 00:29:21.080 17:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:29:21.080 17:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:29:21.080 17:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:21.080 17:25:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:21.080 17:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:29:21.080 00:29:21.080 real 0m4.656s 00:29:21.080 user 0m5.654s 00:29:21.080 sys 0m0.587s 00:29:21.080 17:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.080 ************************************ 00:29:21.080 END TEST raid_write_error_test 00:29:21.080 ************************************ 00:29:21.080 17:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.080 17:25:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:29:21.080 17:25:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:29:21.080 17:25:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:21.080 17:25:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.080 17:25:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:21.080 ************************************ 00:29:21.080 START TEST raid_state_function_test 00:29:21.080 ************************************ 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:21.080 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63093 00:29:21.081 Process raid pid: 63093 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 63093' 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63093 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63093 ']' 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.081 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.081 [2024-11-26 17:25:58.524533] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:29:21.081 [2024-11-26 17:25:58.524684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.340 [2024-11-26 17:25:58.702161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.600 [2024-11-26 17:25:58.832781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.005 [2024-11-26 17:25:59.070574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:22.005 [2024-11-26 17:25:59.070625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.274 [2024-11-26 17:25:59.484086] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:22.274 [2024-11-26 17:25:59.484158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:22.274 [2024-11-26 17:25:59.484172] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:22.274 [2024-11-26 17:25:59.484188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.274 17:25:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.274 "name": "Existed_Raid", 00:29:22.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.274 "strip_size_kb": 0, 00:29:22.274 "state": "configuring", 00:29:22.274 
"raid_level": "raid1", 00:29:22.274 "superblock": false, 00:29:22.274 "num_base_bdevs": 2, 00:29:22.274 "num_base_bdevs_discovered": 0, 00:29:22.274 "num_base_bdevs_operational": 2, 00:29:22.274 "base_bdevs_list": [ 00:29:22.274 { 00:29:22.274 "name": "BaseBdev1", 00:29:22.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.274 "is_configured": false, 00:29:22.274 "data_offset": 0, 00:29:22.274 "data_size": 0 00:29:22.274 }, 00:29:22.274 { 00:29:22.274 "name": "BaseBdev2", 00:29:22.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.274 "is_configured": false, 00:29:22.274 "data_offset": 0, 00:29:22.274 "data_size": 0 00:29:22.274 } 00:29:22.274 ] 00:29:22.274 }' 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.274 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.532 [2024-11-26 17:25:59.948123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:22.532 [2024-11-26 17:25:59.948167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:29:22.532 [2024-11-26 17:25:59.956081] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:22.532 [2024-11-26 17:25:59.956127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:22.532 [2024-11-26 17:25:59.956138] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:22.532 [2024-11-26 17:25:59.956154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.532 17:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.792 [2024-11-26 17:26:00.002311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:22.792 BaseBdev1 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.792 [ 00:29:22.792 { 00:29:22.792 "name": "BaseBdev1", 00:29:22.792 "aliases": [ 00:29:22.792 "1168a29c-e613-4654-9c45-9c60eecc1fee" 00:29:22.792 ], 00:29:22.792 "product_name": "Malloc disk", 00:29:22.792 "block_size": 512, 00:29:22.792 "num_blocks": 65536, 00:29:22.792 "uuid": "1168a29c-e613-4654-9c45-9c60eecc1fee", 00:29:22.792 "assigned_rate_limits": { 00:29:22.792 "rw_ios_per_sec": 0, 00:29:22.792 "rw_mbytes_per_sec": 0, 00:29:22.792 "r_mbytes_per_sec": 0, 00:29:22.792 "w_mbytes_per_sec": 0 00:29:22.792 }, 00:29:22.792 "claimed": true, 00:29:22.792 "claim_type": "exclusive_write", 00:29:22.792 "zoned": false, 00:29:22.792 "supported_io_types": { 00:29:22.792 "read": true, 00:29:22.792 "write": true, 00:29:22.792 "unmap": true, 00:29:22.792 "flush": true, 00:29:22.792 "reset": true, 00:29:22.792 "nvme_admin": false, 00:29:22.792 "nvme_io": false, 00:29:22.792 "nvme_io_md": false, 00:29:22.792 "write_zeroes": true, 00:29:22.792 "zcopy": true, 00:29:22.792 "get_zone_info": false, 00:29:22.792 "zone_management": false, 00:29:22.792 "zone_append": false, 00:29:22.792 "compare": false, 00:29:22.792 "compare_and_write": false, 00:29:22.792 "abort": true, 00:29:22.792 "seek_hole": false, 00:29:22.792 "seek_data": false, 00:29:22.792 "copy": true, 00:29:22.792 "nvme_iov_md": 
false 00:29:22.792 }, 00:29:22.792 "memory_domains": [ 00:29:22.792 { 00:29:22.792 "dma_device_id": "system", 00:29:22.792 "dma_device_type": 1 00:29:22.792 }, 00:29:22.792 { 00:29:22.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:22.792 "dma_device_type": 2 00:29:22.792 } 00:29:22.792 ], 00:29:22.792 "driver_specific": {} 00:29:22.792 } 00:29:22.792 ] 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.792 17:26:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.792 "name": "Existed_Raid", 00:29:22.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.792 "strip_size_kb": 0, 00:29:22.792 "state": "configuring", 00:29:22.792 "raid_level": "raid1", 00:29:22.792 "superblock": false, 00:29:22.792 "num_base_bdevs": 2, 00:29:22.792 "num_base_bdevs_discovered": 1, 00:29:22.792 "num_base_bdevs_operational": 2, 00:29:22.792 "base_bdevs_list": [ 00:29:22.792 { 00:29:22.792 "name": "BaseBdev1", 00:29:22.792 "uuid": "1168a29c-e613-4654-9c45-9c60eecc1fee", 00:29:22.792 "is_configured": true, 00:29:22.792 "data_offset": 0, 00:29:22.792 "data_size": 65536 00:29:22.792 }, 00:29:22.792 { 00:29:22.792 "name": "BaseBdev2", 00:29:22.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.792 "is_configured": false, 00:29:22.792 "data_offset": 0, 00:29:22.792 "data_size": 0 00:29:22.792 } 00:29:22.792 ] 00:29:22.792 }' 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.792 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.360 [2024-11-26 17:26:00.522476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:23.360 [2024-11-26 17:26:00.522534] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.360 [2024-11-26 17:26:00.530512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:23.360 [2024-11-26 17:26:00.532726] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:23.360 [2024-11-26 17:26:00.532777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.360 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:23.360 "name": "Existed_Raid", 00:29:23.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.360 "strip_size_kb": 0, 00:29:23.360 "state": "configuring", 00:29:23.361 "raid_level": "raid1", 00:29:23.361 "superblock": false, 00:29:23.361 "num_base_bdevs": 2, 00:29:23.361 "num_base_bdevs_discovered": 1, 00:29:23.361 "num_base_bdevs_operational": 2, 00:29:23.361 "base_bdevs_list": [ 00:29:23.361 { 00:29:23.361 "name": "BaseBdev1", 00:29:23.361 "uuid": "1168a29c-e613-4654-9c45-9c60eecc1fee", 00:29:23.361 "is_configured": true, 00:29:23.361 "data_offset": 0, 00:29:23.361 "data_size": 65536 00:29:23.361 }, 00:29:23.361 { 00:29:23.361 "name": "BaseBdev2", 00:29:23.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.361 "is_configured": false, 00:29:23.361 "data_offset": 0, 00:29:23.361 "data_size": 0 00:29:23.361 } 00:29:23.361 
] 00:29:23.361 }' 00:29:23.361 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:23.361 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.620 [2024-11-26 17:26:00.956372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:23.620 [2024-11-26 17:26:00.956441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:23.620 [2024-11-26 17:26:00.956452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:23.620 [2024-11-26 17:26:00.956724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:23.620 [2024-11-26 17:26:00.956898] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:23.620 [2024-11-26 17:26:00.956912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:23.620 [2024-11-26 17:26:00.957184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:23.620 BaseBdev2 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:23.620 17:26:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.620 [ 00:29:23.620 { 00:29:23.620 "name": "BaseBdev2", 00:29:23.620 "aliases": [ 00:29:23.620 "1bebbbce-6633-4803-ac5e-b21b7195bf3a" 00:29:23.620 ], 00:29:23.620 "product_name": "Malloc disk", 00:29:23.620 "block_size": 512, 00:29:23.620 "num_blocks": 65536, 00:29:23.620 "uuid": "1bebbbce-6633-4803-ac5e-b21b7195bf3a", 00:29:23.620 "assigned_rate_limits": { 00:29:23.620 "rw_ios_per_sec": 0, 00:29:23.620 "rw_mbytes_per_sec": 0, 00:29:23.620 "r_mbytes_per_sec": 0, 00:29:23.620 "w_mbytes_per_sec": 0 00:29:23.620 }, 00:29:23.620 "claimed": true, 00:29:23.620 "claim_type": "exclusive_write", 00:29:23.620 "zoned": false, 00:29:23.620 "supported_io_types": { 00:29:23.620 "read": true, 00:29:23.620 "write": true, 00:29:23.620 "unmap": true, 00:29:23.620 "flush": true, 00:29:23.620 "reset": true, 00:29:23.620 "nvme_admin": false, 00:29:23.620 "nvme_io": false, 00:29:23.620 "nvme_io_md": 
false, 00:29:23.620 "write_zeroes": true, 00:29:23.620 "zcopy": true, 00:29:23.620 "get_zone_info": false, 00:29:23.620 "zone_management": false, 00:29:23.620 "zone_append": false, 00:29:23.620 "compare": false, 00:29:23.620 "compare_and_write": false, 00:29:23.620 "abort": true, 00:29:23.620 "seek_hole": false, 00:29:23.620 "seek_data": false, 00:29:23.620 "copy": true, 00:29:23.620 "nvme_iov_md": false 00:29:23.620 }, 00:29:23.620 "memory_domains": [ 00:29:23.620 { 00:29:23.620 "dma_device_id": "system", 00:29:23.620 "dma_device_type": 1 00:29:23.620 }, 00:29:23.620 { 00:29:23.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:23.620 "dma_device_type": 2 00:29:23.620 } 00:29:23.620 ], 00:29:23.620 "driver_specific": {} 00:29:23.620 } 00:29:23.620 ] 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.620 17:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:23.620 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.620 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:23.620 "name": "Existed_Raid", 00:29:23.620 "uuid": "c224b217-720c-4bc0-8367-cab578da0481", 00:29:23.620 "strip_size_kb": 0, 00:29:23.620 "state": "online", 00:29:23.620 "raid_level": "raid1", 00:29:23.620 "superblock": false, 00:29:23.620 "num_base_bdevs": 2, 00:29:23.620 "num_base_bdevs_discovered": 2, 00:29:23.620 "num_base_bdevs_operational": 2, 00:29:23.620 "base_bdevs_list": [ 00:29:23.620 { 00:29:23.620 "name": "BaseBdev1", 00:29:23.620 "uuid": "1168a29c-e613-4654-9c45-9c60eecc1fee", 00:29:23.621 "is_configured": true, 00:29:23.621 "data_offset": 0, 00:29:23.621 "data_size": 65536 00:29:23.621 }, 00:29:23.621 { 00:29:23.621 "name": "BaseBdev2", 00:29:23.621 "uuid": "1bebbbce-6633-4803-ac5e-b21b7195bf3a", 00:29:23.621 "is_configured": true, 00:29:23.621 "data_offset": 0, 00:29:23.621 "data_size": 65536 00:29:23.621 } 00:29:23.621 ] 00:29:23.621 }' 00:29:23.621 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:29:23.621 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:24.189 [2024-11-26 17:26:01.400827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.189 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.189 "name": "Existed_Raid", 00:29:24.189 "aliases": [ 00:29:24.189 "c224b217-720c-4bc0-8367-cab578da0481" 00:29:24.189 ], 00:29:24.189 "product_name": "Raid Volume", 00:29:24.189 "block_size": 512, 00:29:24.189 "num_blocks": 65536, 00:29:24.189 "uuid": "c224b217-720c-4bc0-8367-cab578da0481", 00:29:24.189 "assigned_rate_limits": { 00:29:24.189 "rw_ios_per_sec": 0, 00:29:24.189 "rw_mbytes_per_sec": 0, 00:29:24.189 "r_mbytes_per_sec": 
0, 00:29:24.189 "w_mbytes_per_sec": 0 00:29:24.189 }, 00:29:24.189 "claimed": false, 00:29:24.189 "zoned": false, 00:29:24.189 "supported_io_types": { 00:29:24.189 "read": true, 00:29:24.189 "write": true, 00:29:24.189 "unmap": false, 00:29:24.189 "flush": false, 00:29:24.189 "reset": true, 00:29:24.189 "nvme_admin": false, 00:29:24.189 "nvme_io": false, 00:29:24.189 "nvme_io_md": false, 00:29:24.189 "write_zeroes": true, 00:29:24.189 "zcopy": false, 00:29:24.189 "get_zone_info": false, 00:29:24.189 "zone_management": false, 00:29:24.189 "zone_append": false, 00:29:24.189 "compare": false, 00:29:24.189 "compare_and_write": false, 00:29:24.189 "abort": false, 00:29:24.189 "seek_hole": false, 00:29:24.189 "seek_data": false, 00:29:24.189 "copy": false, 00:29:24.189 "nvme_iov_md": false 00:29:24.189 }, 00:29:24.189 "memory_domains": [ 00:29:24.189 { 00:29:24.189 "dma_device_id": "system", 00:29:24.189 "dma_device_type": 1 00:29:24.189 }, 00:29:24.189 { 00:29:24.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.189 "dma_device_type": 2 00:29:24.189 }, 00:29:24.189 { 00:29:24.189 "dma_device_id": "system", 00:29:24.189 "dma_device_type": 1 00:29:24.189 }, 00:29:24.189 { 00:29:24.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.189 "dma_device_type": 2 00:29:24.189 } 00:29:24.189 ], 00:29:24.189 "driver_specific": { 00:29:24.189 "raid": { 00:29:24.189 "uuid": "c224b217-720c-4bc0-8367-cab578da0481", 00:29:24.189 "strip_size_kb": 0, 00:29:24.189 "state": "online", 00:29:24.189 "raid_level": "raid1", 00:29:24.190 "superblock": false, 00:29:24.190 "num_base_bdevs": 2, 00:29:24.190 "num_base_bdevs_discovered": 2, 00:29:24.190 "num_base_bdevs_operational": 2, 00:29:24.190 "base_bdevs_list": [ 00:29:24.190 { 00:29:24.190 "name": "BaseBdev1", 00:29:24.190 "uuid": "1168a29c-e613-4654-9c45-9c60eecc1fee", 00:29:24.190 "is_configured": true, 00:29:24.190 "data_offset": 0, 00:29:24.190 "data_size": 65536 00:29:24.190 }, 00:29:24.190 { 00:29:24.190 "name": "BaseBdev2", 
00:29:24.190 "uuid": "1bebbbce-6633-4803-ac5e-b21b7195bf3a", 00:29:24.190 "is_configured": true, 00:29:24.190 "data_offset": 0, 00:29:24.190 "data_size": 65536 00:29:24.190 } 00:29:24.190 ] 00:29:24.190 } 00:29:24.190 } 00:29:24.190 }' 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:24.190 BaseBdev2' 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.190 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.190 [2024-11-26 17:26:01.616638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:24.449 "name": "Existed_Raid", 00:29:24.449 "uuid": "c224b217-720c-4bc0-8367-cab578da0481", 00:29:24.449 "strip_size_kb": 0, 00:29:24.449 "state": "online", 00:29:24.449 "raid_level": "raid1", 00:29:24.449 "superblock": false, 00:29:24.449 "num_base_bdevs": 2, 00:29:24.449 "num_base_bdevs_discovered": 1, 00:29:24.449 "num_base_bdevs_operational": 1, 00:29:24.449 "base_bdevs_list": [ 00:29:24.449 
{ 00:29:24.449 "name": null, 00:29:24.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.449 "is_configured": false, 00:29:24.449 "data_offset": 0, 00:29:24.449 "data_size": 65536 00:29:24.449 }, 00:29:24.449 { 00:29:24.449 "name": "BaseBdev2", 00:29:24.449 "uuid": "1bebbbce-6633-4803-ac5e-b21b7195bf3a", 00:29:24.449 "is_configured": true, 00:29:24.449 "data_offset": 0, 00:29:24.449 "data_size": 65536 00:29:24.449 } 00:29:24.449 ] 00:29:24.449 }' 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:24.449 17:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.730 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:24.730 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:24.730 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.730 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:24.730 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.730 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.730 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:29:24.989 [2024-11-26 17:26:02.182200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:24.989 [2024-11-26 17:26:02.182302] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:24.989 [2024-11-26 17:26:02.282113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:24.989 [2024-11-26 17:26:02.282168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:24.989 [2024-11-26 17:26:02.282184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63093 00:29:24.989 17:26:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63093 ']' 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63093 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63093 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:24.989 killing process with pid 63093 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63093' 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63093 00:29:24.989 [2024-11-26 17:26:02.379179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:24.989 17:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63093 00:29:24.989 [2024-11-26 17:26:02.399102] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:29:26.369 00:29:26.369 real 0m5.162s 00:29:26.369 user 0m7.448s 00:29:26.369 sys 0m0.854s 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:26.369 ************************************ 00:29:26.369 END TEST raid_state_function_test 00:29:26.369 ************************************ 00:29:26.369 17:26:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:29:26.369 17:26:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:26.369 17:26:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.369 17:26:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:26.369 ************************************ 00:29:26.369 START TEST raid_state_function_test_sb 00:29:26.369 ************************************ 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63344 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63344' 00:29:26.369 Process raid pid: 63344 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63344 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63344 ']' 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.369 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.369 17:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.369 [2024-11-26 17:26:03.778138] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:26.369 [2024-11-26 17:26:03.778301] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.629 [2024-11-26 17:26:03.979368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.887 [2024-11-26 17:26:04.098378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.887 [2024-11-26 17:26:04.317177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:26.887 [2024-11-26 17:26:04.317229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:27.453 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.453 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:29:27.453 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:27.453 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.453 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.453 [2024-11-26 17:26:04.759509] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:27.453 [2024-11-26 17:26:04.759564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:27.453 [2024-11-26 17:26:04.759576] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:27.453 [2024-11-26 17:26:04.759607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:27.453 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.453 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:27.453 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:27.454 "name": "Existed_Raid", 00:29:27.454 "uuid": "299a3b47-7d4f-4324-9f63-8434f30bf7eb", 00:29:27.454 "strip_size_kb": 0, 00:29:27.454 "state": "configuring", 00:29:27.454 "raid_level": "raid1", 00:29:27.454 "superblock": true, 00:29:27.454 "num_base_bdevs": 2, 00:29:27.454 "num_base_bdevs_discovered": 0, 00:29:27.454 "num_base_bdevs_operational": 2, 00:29:27.454 "base_bdevs_list": [ 00:29:27.454 { 00:29:27.454 "name": "BaseBdev1", 00:29:27.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.454 "is_configured": false, 00:29:27.454 "data_offset": 0, 00:29:27.454 "data_size": 0 00:29:27.454 }, 00:29:27.454 { 00:29:27.454 "name": "BaseBdev2", 00:29:27.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.454 "is_configured": false, 00:29:27.454 "data_offset": 0, 00:29:27.454 "data_size": 0 00:29:27.454 } 00:29:27.454 ] 00:29:27.454 }' 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:27.454 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.021 [2024-11-26 17:26:05.239551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:29:28.021 [2024-11-26 17:26:05.239594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.021 [2024-11-26 17:26:05.247544] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:28.021 [2024-11-26 17:26:05.247594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:28.021 [2024-11-26 17:26:05.247605] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:28.021 [2024-11-26 17:26:05.247620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.021 [2024-11-26 17:26:05.295351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:28.021 BaseBdev1 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:28.021 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.022 [ 00:29:28.022 { 00:29:28.022 "name": "BaseBdev1", 00:29:28.022 "aliases": [ 00:29:28.022 "edd5caf7-6b06-4fb5-8484-b62b2328ee5b" 00:29:28.022 ], 00:29:28.022 "product_name": "Malloc disk", 00:29:28.022 "block_size": 512, 00:29:28.022 "num_blocks": 65536, 00:29:28.022 "uuid": "edd5caf7-6b06-4fb5-8484-b62b2328ee5b", 00:29:28.022 "assigned_rate_limits": { 00:29:28.022 "rw_ios_per_sec": 0, 00:29:28.022 "rw_mbytes_per_sec": 0, 00:29:28.022 "r_mbytes_per_sec": 0, 00:29:28.022 "w_mbytes_per_sec": 0 00:29:28.022 }, 00:29:28.022 "claimed": true, 
00:29:28.022 "claim_type": "exclusive_write", 00:29:28.022 "zoned": false, 00:29:28.022 "supported_io_types": { 00:29:28.022 "read": true, 00:29:28.022 "write": true, 00:29:28.022 "unmap": true, 00:29:28.022 "flush": true, 00:29:28.022 "reset": true, 00:29:28.022 "nvme_admin": false, 00:29:28.022 "nvme_io": false, 00:29:28.022 "nvme_io_md": false, 00:29:28.022 "write_zeroes": true, 00:29:28.022 "zcopy": true, 00:29:28.022 "get_zone_info": false, 00:29:28.022 "zone_management": false, 00:29:28.022 "zone_append": false, 00:29:28.022 "compare": false, 00:29:28.022 "compare_and_write": false, 00:29:28.022 "abort": true, 00:29:28.022 "seek_hole": false, 00:29:28.022 "seek_data": false, 00:29:28.022 "copy": true, 00:29:28.022 "nvme_iov_md": false 00:29:28.022 }, 00:29:28.022 "memory_domains": [ 00:29:28.022 { 00:29:28.022 "dma_device_id": "system", 00:29:28.022 "dma_device_type": 1 00:29:28.022 }, 00:29:28.022 { 00:29:28.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:28.022 "dma_device_type": 2 00:29:28.022 } 00:29:28.022 ], 00:29:28.022 "driver_specific": {} 00:29:28.022 } 00:29:28.022 ] 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:28.022 "name": "Existed_Raid", 00:29:28.022 "uuid": "cae44a22-f98d-46fa-b8b7-57c1851a2bbc", 00:29:28.022 "strip_size_kb": 0, 00:29:28.022 "state": "configuring", 00:29:28.022 "raid_level": "raid1", 00:29:28.022 "superblock": true, 00:29:28.022 "num_base_bdevs": 2, 00:29:28.022 "num_base_bdevs_discovered": 1, 00:29:28.022 "num_base_bdevs_operational": 2, 00:29:28.022 "base_bdevs_list": [ 00:29:28.022 { 00:29:28.022 "name": "BaseBdev1", 00:29:28.022 "uuid": "edd5caf7-6b06-4fb5-8484-b62b2328ee5b", 00:29:28.022 "is_configured": true, 00:29:28.022 "data_offset": 2048, 00:29:28.022 "data_size": 63488 00:29:28.022 }, 00:29:28.022 { 00:29:28.022 "name": "BaseBdev2", 00:29:28.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.022 "is_configured": false, 00:29:28.022 
"data_offset": 0, 00:29:28.022 "data_size": 0 00:29:28.022 } 00:29:28.022 ] 00:29:28.022 }' 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.022 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.590 [2024-11-26 17:26:05.771561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:28.590 [2024-11-26 17:26:05.771619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.590 [2024-11-26 17:26:05.779594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:28.590 [2024-11-26 17:26:05.781885] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:28.590 [2024-11-26 17:26:05.781938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:28.590 "name": "Existed_Raid", 00:29:28.590 "uuid": "8527c49e-393c-44da-a6fc-8e9ddf251855", 00:29:28.590 "strip_size_kb": 0, 00:29:28.590 "state": "configuring", 00:29:28.590 "raid_level": "raid1", 00:29:28.590 "superblock": true, 00:29:28.590 "num_base_bdevs": 2, 00:29:28.590 "num_base_bdevs_discovered": 1, 00:29:28.590 "num_base_bdevs_operational": 2, 00:29:28.590 "base_bdevs_list": [ 00:29:28.590 { 00:29:28.590 "name": "BaseBdev1", 00:29:28.590 "uuid": "edd5caf7-6b06-4fb5-8484-b62b2328ee5b", 00:29:28.590 "is_configured": true, 00:29:28.590 "data_offset": 2048, 00:29:28.590 "data_size": 63488 00:29:28.590 }, 00:29:28.590 { 00:29:28.590 "name": "BaseBdev2", 00:29:28.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.590 "is_configured": false, 00:29:28.590 "data_offset": 0, 00:29:28.590 "data_size": 0 00:29:28.590 } 00:29:28.590 ] 00:29:28.590 }' 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.590 17:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.850 [2024-11-26 17:26:06.243354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:28.850 [2024-11-26 17:26:06.243602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:28.850 [2024-11-26 17:26:06.243617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:28.850 [2024-11-26 17:26:06.243886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:28.850 
[2024-11-26 17:26:06.244065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:28.850 [2024-11-26 17:26:06.244083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:28.850 [2024-11-26 17:26:06.244273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:28.850 BaseBdev2 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:28.850 [ 00:29:28.850 { 00:29:28.850 "name": "BaseBdev2", 00:29:28.850 "aliases": [ 00:29:28.850 "6cec15b4-136f-43f5-86f0-e15807753027" 00:29:28.850 ], 00:29:28.850 "product_name": "Malloc disk", 00:29:28.850 "block_size": 512, 00:29:28.850 "num_blocks": 65536, 00:29:28.850 "uuid": "6cec15b4-136f-43f5-86f0-e15807753027", 00:29:28.850 "assigned_rate_limits": { 00:29:28.850 "rw_ios_per_sec": 0, 00:29:28.850 "rw_mbytes_per_sec": 0, 00:29:28.850 "r_mbytes_per_sec": 0, 00:29:28.850 "w_mbytes_per_sec": 0 00:29:28.850 }, 00:29:28.850 "claimed": true, 00:29:28.850 "claim_type": "exclusive_write", 00:29:28.850 "zoned": false, 00:29:28.850 "supported_io_types": { 00:29:28.850 "read": true, 00:29:28.850 "write": true, 00:29:28.850 "unmap": true, 00:29:28.850 "flush": true, 00:29:28.850 "reset": true, 00:29:28.850 "nvme_admin": false, 00:29:28.850 "nvme_io": false, 00:29:28.850 "nvme_io_md": false, 00:29:28.850 "write_zeroes": true, 00:29:28.850 "zcopy": true, 00:29:28.850 "get_zone_info": false, 00:29:28.850 "zone_management": false, 00:29:28.850 "zone_append": false, 00:29:28.850 "compare": false, 00:29:28.850 "compare_and_write": false, 00:29:28.850 "abort": true, 00:29:28.850 "seek_hole": false, 00:29:28.850 "seek_data": false, 00:29:28.850 "copy": true, 00:29:28.850 "nvme_iov_md": false 00:29:28.850 }, 00:29:28.850 "memory_domains": [ 00:29:28.850 { 00:29:28.850 "dma_device_id": "system", 00:29:28.850 "dma_device_type": 1 00:29:28.850 }, 00:29:28.850 { 00:29:28.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:28.850 "dma_device_type": 2 00:29:28.850 } 00:29:28.850 ], 00:29:28.850 "driver_specific": {} 00:29:28.850 } 00:29:28.850 ] 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.850 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.109 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.109 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:29:29.109 "name": "Existed_Raid", 00:29:29.109 "uuid": "8527c49e-393c-44da-a6fc-8e9ddf251855", 00:29:29.109 "strip_size_kb": 0, 00:29:29.109 "state": "online", 00:29:29.109 "raid_level": "raid1", 00:29:29.109 "superblock": true, 00:29:29.109 "num_base_bdevs": 2, 00:29:29.109 "num_base_bdevs_discovered": 2, 00:29:29.109 "num_base_bdevs_operational": 2, 00:29:29.109 "base_bdevs_list": [ 00:29:29.109 { 00:29:29.109 "name": "BaseBdev1", 00:29:29.109 "uuid": "edd5caf7-6b06-4fb5-8484-b62b2328ee5b", 00:29:29.109 "is_configured": true, 00:29:29.109 "data_offset": 2048, 00:29:29.109 "data_size": 63488 00:29:29.109 }, 00:29:29.109 { 00:29:29.109 "name": "BaseBdev2", 00:29:29.109 "uuid": "6cec15b4-136f-43f5-86f0-e15807753027", 00:29:29.109 "is_configured": true, 00:29:29.109 "data_offset": 2048, 00:29:29.109 "data_size": 63488 00:29:29.109 } 00:29:29.109 ] 00:29:29.109 }' 00:29:29.109 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.109 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:29.368 17:26:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:29.368 [2024-11-26 17:26:06.707796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.368 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:29.368 "name": "Existed_Raid", 00:29:29.368 "aliases": [ 00:29:29.368 "8527c49e-393c-44da-a6fc-8e9ddf251855" 00:29:29.368 ], 00:29:29.368 "product_name": "Raid Volume", 00:29:29.368 "block_size": 512, 00:29:29.368 "num_blocks": 63488, 00:29:29.368 "uuid": "8527c49e-393c-44da-a6fc-8e9ddf251855", 00:29:29.368 "assigned_rate_limits": { 00:29:29.368 "rw_ios_per_sec": 0, 00:29:29.368 "rw_mbytes_per_sec": 0, 00:29:29.368 "r_mbytes_per_sec": 0, 00:29:29.368 "w_mbytes_per_sec": 0 00:29:29.368 }, 00:29:29.368 "claimed": false, 00:29:29.368 "zoned": false, 00:29:29.368 "supported_io_types": { 00:29:29.368 "read": true, 00:29:29.368 "write": true, 00:29:29.368 "unmap": false, 00:29:29.368 "flush": false, 00:29:29.368 "reset": true, 00:29:29.368 "nvme_admin": false, 00:29:29.368 "nvme_io": false, 00:29:29.368 "nvme_io_md": false, 00:29:29.368 "write_zeroes": true, 00:29:29.368 "zcopy": false, 00:29:29.368 "get_zone_info": false, 00:29:29.368 "zone_management": false, 00:29:29.368 "zone_append": false, 00:29:29.368 "compare": false, 00:29:29.368 "compare_and_write": false, 00:29:29.368 "abort": false, 00:29:29.368 "seek_hole": false, 00:29:29.368 "seek_data": false, 00:29:29.368 "copy": false, 00:29:29.368 "nvme_iov_md": false 00:29:29.368 }, 00:29:29.368 "memory_domains": [ 00:29:29.368 { 00:29:29.368 "dma_device_id": "system", 00:29:29.368 
"dma_device_type": 1 00:29:29.368 }, 00:29:29.368 { 00:29:29.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:29.368 "dma_device_type": 2 00:29:29.368 }, 00:29:29.368 { 00:29:29.368 "dma_device_id": "system", 00:29:29.368 "dma_device_type": 1 00:29:29.368 }, 00:29:29.368 { 00:29:29.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:29.368 "dma_device_type": 2 00:29:29.368 } 00:29:29.368 ], 00:29:29.368 "driver_specific": { 00:29:29.368 "raid": { 00:29:29.368 "uuid": "8527c49e-393c-44da-a6fc-8e9ddf251855", 00:29:29.368 "strip_size_kb": 0, 00:29:29.368 "state": "online", 00:29:29.368 "raid_level": "raid1", 00:29:29.368 "superblock": true, 00:29:29.368 "num_base_bdevs": 2, 00:29:29.368 "num_base_bdevs_discovered": 2, 00:29:29.368 "num_base_bdevs_operational": 2, 00:29:29.368 "base_bdevs_list": [ 00:29:29.368 { 00:29:29.368 "name": "BaseBdev1", 00:29:29.369 "uuid": "edd5caf7-6b06-4fb5-8484-b62b2328ee5b", 00:29:29.369 "is_configured": true, 00:29:29.369 "data_offset": 2048, 00:29:29.369 "data_size": 63488 00:29:29.369 }, 00:29:29.369 { 00:29:29.369 "name": "BaseBdev2", 00:29:29.369 "uuid": "6cec15b4-136f-43f5-86f0-e15807753027", 00:29:29.369 "is_configured": true, 00:29:29.369 "data_offset": 2048, 00:29:29.369 "data_size": 63488 00:29:29.369 } 00:29:29.369 ] 00:29:29.369 } 00:29:29.369 } 00:29:29.369 }' 00:29:29.369 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:29.369 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:29.369 BaseBdev2' 00:29:29.369 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:29.628 17:26:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.628 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.628 [2024-11-26 17:26:06.931646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:29.628 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.628 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:29.628 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.629 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.887 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:29.887 "name": "Existed_Raid", 00:29:29.887 "uuid": "8527c49e-393c-44da-a6fc-8e9ddf251855", 00:29:29.887 "strip_size_kb": 0, 00:29:29.887 "state": "online", 00:29:29.887 "raid_level": "raid1", 00:29:29.887 "superblock": true, 00:29:29.887 "num_base_bdevs": 2, 00:29:29.887 "num_base_bdevs_discovered": 1, 00:29:29.887 "num_base_bdevs_operational": 1, 00:29:29.887 "base_bdevs_list": [ 00:29:29.887 { 00:29:29.887 "name": null, 00:29:29.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.887 "is_configured": false, 00:29:29.887 "data_offset": 0, 00:29:29.887 "data_size": 63488 00:29:29.887 }, 00:29:29.887 { 00:29:29.887 "name": "BaseBdev2", 00:29:29.887 "uuid": "6cec15b4-136f-43f5-86f0-e15807753027", 00:29:29.887 "is_configured": true, 00:29:29.887 "data_offset": 2048, 00:29:29.887 "data_size": 63488 00:29:29.887 } 00:29:29.887 ] 00:29:29.887 }' 00:29:29.887 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.887 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.145 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:29:30.145 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:30.145 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:30.146 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.146 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.146 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.146 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.146 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:30.146 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:30.146 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:30.146 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.146 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.146 [2024-11-26 17:26:07.528758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:30.146 [2024-11-26 17:26:07.528872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:30.404 [2024-11-26 17:26:07.628873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:30.404 [2024-11-26 17:26:07.628934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:30.404 [2024-11-26 17:26:07.628950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:30.404 17:26:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63344 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63344 ']' 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63344 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63344 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:30.405 17:26:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:30.405 killing process with pid 63344 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63344' 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63344 00:29:30.405 [2024-11-26 17:26:07.702762] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:30.405 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63344 00:29:30.405 [2024-11-26 17:26:07.721346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:31.781 17:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:29:31.781 00:29:31.781 real 0m5.281s 00:29:31.781 user 0m7.682s 00:29:31.781 sys 0m0.854s 00:29:31.781 17:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.781 17:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.781 ************************************ 00:29:31.781 END TEST raid_state_function_test_sb 00:29:31.781 ************************************ 00:29:31.781 17:26:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:29:31.781 17:26:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:31.781 17:26:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.781 17:26:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:31.781 ************************************ 00:29:31.781 START TEST raid_superblock_test 00:29:31.781 ************************************ 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63600 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63600 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63600 ']' 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.781 17:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:31.781 [2024-11-26 17:26:09.078589] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:31.781 [2024-11-26 17:26:09.078733] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63600 ] 00:29:32.039 [2024-11-26 17:26:09.249213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.039 [2024-11-26 17:26:09.375471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.300 [2024-11-26 17:26:09.601016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:32.300 [2024-11-26 17:26:09.601078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:32.938 17:26:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.938 malloc1 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.938 [2024-11-26 17:26:10.139603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:32.938 [2024-11-26 17:26:10.139668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.938 [2024-11-26 17:26:10.139710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:32.938 [2024-11-26 17:26:10.139724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.938 
[2024-11-26 17:26:10.142435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.938 [2024-11-26 17:26:10.142479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:32.938 pt1 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:32.938 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.939 malloc2 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.939 17:26:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.939 [2024-11-26 17:26:10.196707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:32.939 [2024-11-26 17:26:10.196770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.939 [2024-11-26 17:26:10.196800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:32.939 [2024-11-26 17:26:10.196811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.939 [2024-11-26 17:26:10.199377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.939 [2024-11-26 17:26:10.199432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:32.939 pt2 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.939 [2024-11-26 17:26:10.208759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:32.939 [2024-11-26 17:26:10.211021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:32.939 [2024-11-26 17:26:10.211217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:32.939 [2024-11-26 17:26:10.211250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:32.939 [2024-11-26 
17:26:10.211528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:32.939 [2024-11-26 17:26:10.211694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:32.939 [2024-11-26 17:26:10.211720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:32.939 [2024-11-26 17:26:10.211875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.939 17:26:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:32.939 "name": "raid_bdev1", 00:29:32.939 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:32.939 "strip_size_kb": 0, 00:29:32.939 "state": "online", 00:29:32.939 "raid_level": "raid1", 00:29:32.939 "superblock": true, 00:29:32.939 "num_base_bdevs": 2, 00:29:32.939 "num_base_bdevs_discovered": 2, 00:29:32.939 "num_base_bdevs_operational": 2, 00:29:32.939 "base_bdevs_list": [ 00:29:32.939 { 00:29:32.939 "name": "pt1", 00:29:32.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:32.939 "is_configured": true, 00:29:32.939 "data_offset": 2048, 00:29:32.939 "data_size": 63488 00:29:32.939 }, 00:29:32.939 { 00:29:32.939 "name": "pt2", 00:29:32.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:32.939 "is_configured": true, 00:29:32.939 "data_offset": 2048, 00:29:32.939 "data_size": 63488 00:29:32.939 } 00:29:32.939 ] 00:29:32.939 }' 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:32.939 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:33.507 
17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.507 [2024-11-26 17:26:10.677175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.507 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:33.507 "name": "raid_bdev1", 00:29:33.507 "aliases": [ 00:29:33.507 "004d5fef-1f8c-4ee9-b767-004d853fc5c4" 00:29:33.507 ], 00:29:33.507 "product_name": "Raid Volume", 00:29:33.507 "block_size": 512, 00:29:33.507 "num_blocks": 63488, 00:29:33.507 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:33.507 "assigned_rate_limits": { 00:29:33.507 "rw_ios_per_sec": 0, 00:29:33.507 "rw_mbytes_per_sec": 0, 00:29:33.507 "r_mbytes_per_sec": 0, 00:29:33.507 "w_mbytes_per_sec": 0 00:29:33.507 }, 00:29:33.507 "claimed": false, 00:29:33.507 "zoned": false, 00:29:33.507 "supported_io_types": { 00:29:33.507 "read": true, 00:29:33.507 "write": true, 00:29:33.507 "unmap": false, 00:29:33.507 "flush": false, 00:29:33.507 "reset": true, 00:29:33.507 "nvme_admin": false, 00:29:33.507 "nvme_io": false, 00:29:33.507 "nvme_io_md": false, 00:29:33.507 "write_zeroes": true, 00:29:33.507 "zcopy": false, 00:29:33.507 "get_zone_info": false, 00:29:33.507 "zone_management": false, 00:29:33.507 "zone_append": false, 00:29:33.507 "compare": false, 00:29:33.507 "compare_and_write": false, 00:29:33.507 "abort": false, 00:29:33.507 "seek_hole": false, 
00:29:33.507 "seek_data": false, 00:29:33.507 "copy": false, 00:29:33.507 "nvme_iov_md": false 00:29:33.507 }, 00:29:33.507 "memory_domains": [ 00:29:33.507 { 00:29:33.507 "dma_device_id": "system", 00:29:33.507 "dma_device_type": 1 00:29:33.507 }, 00:29:33.507 { 00:29:33.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:33.507 "dma_device_type": 2 00:29:33.507 }, 00:29:33.507 { 00:29:33.507 "dma_device_id": "system", 00:29:33.507 "dma_device_type": 1 00:29:33.507 }, 00:29:33.507 { 00:29:33.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:33.507 "dma_device_type": 2 00:29:33.507 } 00:29:33.507 ], 00:29:33.507 "driver_specific": { 00:29:33.507 "raid": { 00:29:33.507 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:33.507 "strip_size_kb": 0, 00:29:33.507 "state": "online", 00:29:33.507 "raid_level": "raid1", 00:29:33.507 "superblock": true, 00:29:33.507 "num_base_bdevs": 2, 00:29:33.507 "num_base_bdevs_discovered": 2, 00:29:33.507 "num_base_bdevs_operational": 2, 00:29:33.507 "base_bdevs_list": [ 00:29:33.507 { 00:29:33.507 "name": "pt1", 00:29:33.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:33.507 "is_configured": true, 00:29:33.507 "data_offset": 2048, 00:29:33.507 "data_size": 63488 00:29:33.507 }, 00:29:33.507 { 00:29:33.507 "name": "pt2", 00:29:33.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:33.507 "is_configured": true, 00:29:33.507 "data_offset": 2048, 00:29:33.507 "data_size": 63488 00:29:33.507 } 00:29:33.507 ] 00:29:33.507 } 00:29:33.507 } 00:29:33.507 }' 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:33.508 pt2' 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:33.508 17:26:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.508 [2024-11-26 17:26:10.913222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:33.508 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=004d5fef-1f8c-4ee9-b767-004d853fc5c4 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 004d5fef-1f8c-4ee9-b767-004d853fc5c4 ']' 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 [2024-11-26 17:26:10.960907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:33.768 [2024-11-26 17:26:10.960941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:33.768 [2024-11-26 17:26:10.961033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:33.768 [2024-11-26 17:26:10.961111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:33.768 [2024-11-26 17:26:10.961127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:33.768 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 [2024-11-26 17:26:11.100998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:33.768 [2024-11-26 17:26:11.103392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:33.768 [2024-11-26 17:26:11.103472] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:29:33.768 [2024-11-26 17:26:11.103535] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:33.768 [2024-11-26 17:26:11.103554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:33.768 [2024-11-26 17:26:11.103567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:29:33.768 request: 00:29:33.768 { 00:29:33.768 "name": "raid_bdev1", 00:29:33.768 "raid_level": "raid1", 00:29:33.768 "base_bdevs": [ 00:29:33.768 "malloc1", 00:29:33.768 "malloc2" 00:29:33.768 ], 00:29:33.768 "superblock": false, 00:29:33.768 "method": "bdev_raid_create", 00:29:33.768 "req_id": 1 00:29:33.768 } 00:29:33.768 Got JSON-RPC error response 00:29:33.768 response: 00:29:33.768 { 00:29:33.768 "code": -17, 00:29:33.768 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:33.768 } 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.768 [2024-11-26 17:26:11.160962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:33.768 [2024-11-26 17:26:11.161058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.768 [2024-11-26 17:26:11.161083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:33.768 [2024-11-26 17:26:11.161098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.768 [2024-11-26 17:26:11.163728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.768 [2024-11-26 17:26:11.163775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:33.768 [2024-11-26 17:26:11.163867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:33.768 [2024-11-26 17:26:11.163935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:33.768 pt1 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.768 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:33.769 17:26:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:33.769 "name": "raid_bdev1", 00:29:33.769 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:33.769 "strip_size_kb": 0, 00:29:33.769 "state": "configuring", 00:29:33.769 "raid_level": "raid1", 00:29:33.769 "superblock": true, 00:29:33.769 "num_base_bdevs": 2, 00:29:33.769 "num_base_bdevs_discovered": 1, 00:29:33.769 "num_base_bdevs_operational": 2, 00:29:33.769 "base_bdevs_list": [ 00:29:33.769 { 00:29:33.769 "name": "pt1", 00:29:33.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:33.769 
"is_configured": true, 00:29:33.769 "data_offset": 2048, 00:29:33.769 "data_size": 63488 00:29:33.769 }, 00:29:33.769 { 00:29:33.769 "name": null, 00:29:33.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:33.769 "is_configured": false, 00:29:33.769 "data_offset": 2048, 00:29:33.769 "data_size": 63488 00:29:33.769 } 00:29:33.769 ] 00:29:33.769 }' 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:33.769 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.338 [2024-11-26 17:26:11.609135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:34.338 [2024-11-26 17:26:11.609218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.338 [2024-11-26 17:26:11.609244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:34.338 [2024-11-26 17:26:11.609260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.338 [2024-11-26 17:26:11.609741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.338 [2024-11-26 17:26:11.609779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:34.338 [2024-11-26 17:26:11.609868] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:34.338 [2024-11-26 17:26:11.609898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:34.338 [2024-11-26 17:26:11.610031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:34.338 [2024-11-26 17:26:11.610076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:34.338 [2024-11-26 17:26:11.610375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:34.338 [2024-11-26 17:26:11.610542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:34.338 [2024-11-26 17:26:11.610556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:34.338 [2024-11-26 17:26:11.610711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.338 pt2 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:34.338 
17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:34.338 "name": "raid_bdev1", 00:29:34.338 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:34.338 "strip_size_kb": 0, 00:29:34.338 "state": "online", 00:29:34.338 "raid_level": "raid1", 00:29:34.338 "superblock": true, 00:29:34.338 "num_base_bdevs": 2, 00:29:34.338 "num_base_bdevs_discovered": 2, 00:29:34.338 "num_base_bdevs_operational": 2, 00:29:34.338 "base_bdevs_list": [ 00:29:34.338 { 00:29:34.338 "name": "pt1", 00:29:34.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:34.338 "is_configured": true, 00:29:34.338 "data_offset": 2048, 00:29:34.338 "data_size": 63488 00:29:34.338 }, 00:29:34.338 { 00:29:34.338 "name": "pt2", 00:29:34.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:34.338 "is_configured": true, 00:29:34.338 "data_offset": 2048, 00:29:34.338 "data_size": 63488 00:29:34.338 } 00:29:34.338 ] 00:29:34.338 }' 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:29:34.338 17:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.907 [2024-11-26 17:26:12.105490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.907 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.907 "name": "raid_bdev1", 00:29:34.907 "aliases": [ 00:29:34.907 "004d5fef-1f8c-4ee9-b767-004d853fc5c4" 00:29:34.907 ], 00:29:34.907 "product_name": "Raid Volume", 00:29:34.907 "block_size": 512, 00:29:34.907 "num_blocks": 63488, 00:29:34.907 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:34.907 "assigned_rate_limits": { 00:29:34.907 "rw_ios_per_sec": 0, 00:29:34.907 "rw_mbytes_per_sec": 0, 00:29:34.907 "r_mbytes_per_sec": 0, 00:29:34.907 "w_mbytes_per_sec": 0 
00:29:34.907 }, 00:29:34.907 "claimed": false, 00:29:34.907 "zoned": false, 00:29:34.907 "supported_io_types": { 00:29:34.907 "read": true, 00:29:34.907 "write": true, 00:29:34.907 "unmap": false, 00:29:34.907 "flush": false, 00:29:34.907 "reset": true, 00:29:34.907 "nvme_admin": false, 00:29:34.907 "nvme_io": false, 00:29:34.907 "nvme_io_md": false, 00:29:34.907 "write_zeroes": true, 00:29:34.907 "zcopy": false, 00:29:34.907 "get_zone_info": false, 00:29:34.907 "zone_management": false, 00:29:34.907 "zone_append": false, 00:29:34.907 "compare": false, 00:29:34.907 "compare_and_write": false, 00:29:34.907 "abort": false, 00:29:34.907 "seek_hole": false, 00:29:34.907 "seek_data": false, 00:29:34.907 "copy": false, 00:29:34.907 "nvme_iov_md": false 00:29:34.907 }, 00:29:34.907 "memory_domains": [ 00:29:34.907 { 00:29:34.907 "dma_device_id": "system", 00:29:34.907 "dma_device_type": 1 00:29:34.907 }, 00:29:34.907 { 00:29:34.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:34.907 "dma_device_type": 2 00:29:34.907 }, 00:29:34.907 { 00:29:34.907 "dma_device_id": "system", 00:29:34.907 "dma_device_type": 1 00:29:34.907 }, 00:29:34.907 { 00:29:34.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:34.907 "dma_device_type": 2 00:29:34.907 } 00:29:34.907 ], 00:29:34.907 "driver_specific": { 00:29:34.907 "raid": { 00:29:34.907 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:34.907 "strip_size_kb": 0, 00:29:34.907 "state": "online", 00:29:34.907 "raid_level": "raid1", 00:29:34.907 "superblock": true, 00:29:34.907 "num_base_bdevs": 2, 00:29:34.907 "num_base_bdevs_discovered": 2, 00:29:34.907 "num_base_bdevs_operational": 2, 00:29:34.907 "base_bdevs_list": [ 00:29:34.907 { 00:29:34.907 "name": "pt1", 00:29:34.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:34.907 "is_configured": true, 00:29:34.907 "data_offset": 2048, 00:29:34.907 "data_size": 63488 00:29:34.907 }, 00:29:34.907 { 00:29:34.907 "name": "pt2", 00:29:34.907 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:29:34.907 "is_configured": true, 00:29:34.907 "data_offset": 2048, 00:29:34.907 "data_size": 63488 00:29:34.907 } 00:29:34.907 ] 00:29:34.907 } 00:29:34.907 } 00:29:34.907 }' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:34.908 pt2' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.908 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.908 [2024-11-26 17:26:12.333540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 004d5fef-1f8c-4ee9-b767-004d853fc5c4 '!=' 004d5fef-1f8c-4ee9-b767-004d853fc5c4 ']' 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:29:35.168 [2024-11-26 17:26:12.381342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.168 "name": "raid_bdev1", 
00:29:35.168 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:35.168 "strip_size_kb": 0, 00:29:35.168 "state": "online", 00:29:35.168 "raid_level": "raid1", 00:29:35.168 "superblock": true, 00:29:35.168 "num_base_bdevs": 2, 00:29:35.168 "num_base_bdevs_discovered": 1, 00:29:35.168 "num_base_bdevs_operational": 1, 00:29:35.168 "base_bdevs_list": [ 00:29:35.168 { 00:29:35.168 "name": null, 00:29:35.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:35.168 "is_configured": false, 00:29:35.168 "data_offset": 0, 00:29:35.168 "data_size": 63488 00:29:35.168 }, 00:29:35.168 { 00:29:35.168 "name": "pt2", 00:29:35.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:35.168 "is_configured": true, 00:29:35.168 "data_offset": 2048, 00:29:35.168 "data_size": 63488 00:29:35.168 } 00:29:35.168 ] 00:29:35.168 }' 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.168 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.427 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:35.427 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.427 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.685 [2024-11-26 17:26:12.873486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:35.685 [2024-11-26 17:26:12.873528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:35.685 [2024-11-26 17:26:12.873625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:35.685 [2024-11-26 17:26:12.873675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:35.685 [2024-11-26 17:26:12.873691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:29:35.685 17:26:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.685 [2024-11-26 17:26:12.945490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:35.685 [2024-11-26 17:26:12.945580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.685 [2024-11-26 17:26:12.945614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:35.685 [2024-11-26 17:26:12.945629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.685 [2024-11-26 17:26:12.948465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.685 [2024-11-26 17:26:12.948656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:35.685 [2024-11-26 17:26:12.948776] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:35.685 [2024-11-26 17:26:12.948847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:35.685 [2024-11-26 17:26:12.948974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:35.685 [2024-11-26 17:26:12.948993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:35.685 [2024-11-26 17:26:12.949347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:35.685 [2024-11-26 17:26:12.949521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:35.685 [2024-11-26 17:26:12.949533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:29:35.685 
[2024-11-26 17:26:12.949750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.685 pt2 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.685 "name": 
"raid_bdev1", 00:29:35.685 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:35.685 "strip_size_kb": 0, 00:29:35.685 "state": "online", 00:29:35.685 "raid_level": "raid1", 00:29:35.685 "superblock": true, 00:29:35.685 "num_base_bdevs": 2, 00:29:35.685 "num_base_bdevs_discovered": 1, 00:29:35.685 "num_base_bdevs_operational": 1, 00:29:35.685 "base_bdevs_list": [ 00:29:35.685 { 00:29:35.685 "name": null, 00:29:35.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:35.685 "is_configured": false, 00:29:35.685 "data_offset": 2048, 00:29:35.685 "data_size": 63488 00:29:35.685 }, 00:29:35.685 { 00:29:35.685 "name": "pt2", 00:29:35.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:35.685 "is_configured": true, 00:29:35.685 "data_offset": 2048, 00:29:35.685 "data_size": 63488 00:29:35.685 } 00:29:35.685 ] 00:29:35.685 }' 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.685 17:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.251 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:36.251 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.251 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.251 [2024-11-26 17:26:13.401797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:36.251 [2024-11-26 17:26:13.401832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:36.251 [2024-11-26 17:26:13.401913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:36.251 [2024-11-26 17:26:13.401968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:36.251 [2024-11-26 17:26:13.401980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:29:36.251 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.251 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:36.251 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:29:36.251 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.251 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.252 [2024-11-26 17:26:13.457804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:36.252 [2024-11-26 17:26:13.457872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.252 [2024-11-26 17:26:13.457897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:29:36.252 [2024-11-26 17:26:13.457910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.252 [2024-11-26 17:26:13.460640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.252 [2024-11-26 17:26:13.460817] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:36.252 [2024-11-26 17:26:13.460934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:36.252 [2024-11-26 17:26:13.460997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:36.252 [2024-11-26 17:26:13.461222] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:36.252 [2024-11-26 17:26:13.461239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:36.252 [2024-11-26 17:26:13.461261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:29:36.252 [2024-11-26 17:26:13.461341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:36.252 [2024-11-26 17:26:13.461423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:29:36.252 [2024-11-26 17:26:13.461435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:36.252 [2024-11-26 17:26:13.461747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:36.252 [2024-11-26 17:26:13.461923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:29:36.252 [2024-11-26 17:26:13.461940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:29:36.252 [2024-11-26 17:26:13.462225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:36.252 pt1 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:36.252 "name": "raid_bdev1", 00:29:36.252 "uuid": "004d5fef-1f8c-4ee9-b767-004d853fc5c4", 00:29:36.252 "strip_size_kb": 0, 00:29:36.252 "state": "online", 00:29:36.252 "raid_level": "raid1", 00:29:36.252 "superblock": true, 00:29:36.252 "num_base_bdevs": 2, 00:29:36.252 "num_base_bdevs_discovered": 1, 00:29:36.252 "num_base_bdevs_operational": 1, 00:29:36.252 
"base_bdevs_list": [ 00:29:36.252 { 00:29:36.252 "name": null, 00:29:36.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:36.252 "is_configured": false, 00:29:36.252 "data_offset": 2048, 00:29:36.252 "data_size": 63488 00:29:36.252 }, 00:29:36.252 { 00:29:36.252 "name": "pt2", 00:29:36.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:36.252 "is_configured": true, 00:29:36.252 "data_offset": 2048, 00:29:36.252 "data_size": 63488 00:29:36.252 } 00:29:36.252 ] 00:29:36.252 }' 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:36.252 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.510 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:29:36.510 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.510 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:36.510 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.510 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.768 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:29:36.768 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:36.768 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.768 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.768 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:29:36.768 [2024-11-26 17:26:13.982488] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 004d5fef-1f8c-4ee9-b767-004d853fc5c4 '!=' 004d5fef-1f8c-4ee9-b767-004d853fc5c4 ']' 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63600 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63600 ']' 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63600 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63600 00:29:36.768 killing process with pid 63600 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63600' 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63600 00:29:36.768 [2024-11-26 17:26:14.058483] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:36.768 [2024-11-26 17:26:14.058582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:36.768 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63600 00:29:36.768 [2024-11-26 17:26:14.058627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:36.768 [2024-11-26 17:26:14.058645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:29:37.062 [2024-11-26 17:26:14.287127] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:38.449 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:29:38.449 00:29:38.449 real 0m6.468s 00:29:38.449 user 0m9.919s 00:29:38.449 sys 0m1.131s 00:29:38.449 ************************************ 00:29:38.449 END TEST raid_superblock_test 00:29:38.449 ************************************ 00:29:38.449 17:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.449 17:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.449 17:26:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:29:38.449 17:26:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:38.449 17:26:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.449 17:26:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:38.449 ************************************ 00:29:38.449 START TEST raid_read_error_test 00:29:38.449 ************************************ 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gFyiRX5MLd 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63933 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63933 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63933 ']' 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 
50 -o 128k -q 1 -z -f -L bdev_raid 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.449 17:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.449 [2024-11-26 17:26:15.650708] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:38.449 [2024-11-26 17:26:15.650883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63933 ] 00:29:38.449 [2024-11-26 17:26:15.842413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.707 [2024-11-26 17:26:15.960618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.966 [2024-11-26 17:26:16.172838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:38.967 [2024-11-26 17:26:16.172891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:39.226 17:26:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.226 BaseBdev1_malloc 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.226 true 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.226 [2024-11-26 17:26:16.634964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:39.226 [2024-11-26 17:26:16.635029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.226 [2024-11-26 17:26:16.635066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:39.226 [2024-11-26 17:26:16.635084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.226 [2024-11-26 17:26:16.637555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.226 [2024-11-26 17:26:16.637602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:29:39.226 BaseBdev1 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.226 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.485 BaseBdev2_malloc 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.485 true 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.485 [2024-11-26 17:26:16.697274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:39.485 [2024-11-26 17:26:16.697332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.485 [2024-11-26 17:26:16.697353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:39.485 [2024-11-26 17:26:16.697385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:29:39.485 [2024-11-26 17:26:16.699875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.485 [2024-11-26 17:26:16.699922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:39.485 BaseBdev2 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.485 [2024-11-26 17:26:16.705351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:39.485 [2024-11-26 17:26:16.707644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:39.485 [2024-11-26 17:26:16.707854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:39.485 [2024-11-26 17:26:16.707877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:39.485 [2024-11-26 17:26:16.708199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:39.485 [2024-11-26 17:26:16.708399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:39.485 [2024-11-26 17:26:16.708418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:39.485 [2024-11-26 17:26:16.708587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:39.485 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:39.486 "name": "raid_bdev1", 00:29:39.486 "uuid": "f502af6c-3ac2-4046-9077-2ba9928a7f09", 00:29:39.486 "strip_size_kb": 0, 00:29:39.486 "state": "online", 00:29:39.486 "raid_level": "raid1", 00:29:39.486 "superblock": true, 00:29:39.486 "num_base_bdevs": 2, 00:29:39.486 "num_base_bdevs_discovered": 2, 00:29:39.486 "num_base_bdevs_operational": 
2, 00:29:39.486 "base_bdevs_list": [ 00:29:39.486 { 00:29:39.486 "name": "BaseBdev1", 00:29:39.486 "uuid": "e89ec6ec-1b6e-5d23-a2c2-23e02ae7de78", 00:29:39.486 "is_configured": true, 00:29:39.486 "data_offset": 2048, 00:29:39.486 "data_size": 63488 00:29:39.486 }, 00:29:39.486 { 00:29:39.486 "name": "BaseBdev2", 00:29:39.486 "uuid": "69de39e7-72db-5d78-bd3c-1eba7de5b71e", 00:29:39.486 "is_configured": true, 00:29:39.486 "data_offset": 2048, 00:29:39.486 "data_size": 63488 00:29:39.486 } 00:29:39.486 ] 00:29:39.486 }' 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:39.486 17:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:29:39.745 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:40.003 [2024-11-26 17:26:17.274838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:29:40.940 
17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:40.940 "name": "raid_bdev1", 00:29:40.940 "uuid": "f502af6c-3ac2-4046-9077-2ba9928a7f09", 00:29:40.940 "strip_size_kb": 0, 00:29:40.940 "state": "online", 00:29:40.940 "raid_level": "raid1", 00:29:40.940 "superblock": true, 00:29:40.940 "num_base_bdevs": 
2, 00:29:40.940 "num_base_bdevs_discovered": 2, 00:29:40.940 "num_base_bdevs_operational": 2, 00:29:40.940 "base_bdevs_list": [ 00:29:40.940 { 00:29:40.940 "name": "BaseBdev1", 00:29:40.940 "uuid": "e89ec6ec-1b6e-5d23-a2c2-23e02ae7de78", 00:29:40.940 "is_configured": true, 00:29:40.940 "data_offset": 2048, 00:29:40.940 "data_size": 63488 00:29:40.940 }, 00:29:40.940 { 00:29:40.940 "name": "BaseBdev2", 00:29:40.940 "uuid": "69de39e7-72db-5d78-bd3c-1eba7de5b71e", 00:29:40.940 "is_configured": true, 00:29:40.940 "data_offset": 2048, 00:29:40.940 "data_size": 63488 00:29:40.940 } 00:29:40.940 ] 00:29:40.940 }' 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:40.940 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.200 [2024-11-26 17:26:18.617709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:41.200 [2024-11-26 17:26:18.617761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:41.200 [2024-11-26 17:26:18.620689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:41.200 [2024-11-26 17:26:18.620751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:41.200 [2024-11-26 17:26:18.620837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:41.200 [2024-11-26 17:26:18.620854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:41.200 { 00:29:41.200 "results": [ 00:29:41.200 { 00:29:41.200 "job": 
"raid_bdev1", 00:29:41.200 "core_mask": "0x1", 00:29:41.200 "workload": "randrw", 00:29:41.200 "percentage": 50, 00:29:41.200 "status": "finished", 00:29:41.200 "queue_depth": 1, 00:29:41.200 "io_size": 131072, 00:29:41.200 "runtime": 1.340697, 00:29:41.200 "iops": 16459.34912959453, 00:29:41.200 "mibps": 2057.4186411993164, 00:29:41.200 "io_failed": 0, 00:29:41.200 "io_timeout": 0, 00:29:41.200 "avg_latency_us": 57.81089573528238, 00:29:41.200 "min_latency_us": 24.86857142857143, 00:29:41.200 "max_latency_us": 1568.182857142857 00:29:41.200 } 00:29:41.200 ], 00:29:41.200 "core_count": 1 00:29:41.200 } 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63933 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63933 ']' 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63933 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.200 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63933 00:29:41.459 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.459 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.459 killing process with pid 63933 00:29:41.459 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63933' 00:29:41.459 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63933 00:29:41.459 [2024-11-26 17:26:18.669346] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:41.459 17:26:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63933 00:29:41.459 [2024-11-26 17:26:18.813093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gFyiRX5MLd 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:29:42.836 00:29:42.836 real 0m4.537s 00:29:42.836 user 0m5.511s 00:29:42.836 sys 0m0.610s 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.836 17:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.836 ************************************ 00:29:42.836 END TEST raid_read_error_test 00:29:42.836 ************************************ 00:29:42.836 17:26:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:29:42.836 17:26:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:42.836 17:26:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.836 17:26:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:42.836 ************************************ 00:29:42.836 START TEST raid_write_error_test 00:29:42.836 ************************************ 00:29:42.836 17:26:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:42.836 
17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ML71CaE1B1 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64074 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64074 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64074 ']' 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.836 17:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.836 [2024-11-26 17:26:20.241885] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:29:42.836 [2024-11-26 17:26:20.242090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64074 ] 00:29:43.096 [2024-11-26 17:26:20.444597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.355 [2024-11-26 17:26:20.625383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.615 [2024-11-26 17:26:20.840742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:43.615 [2024-11-26 17:26:20.840784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.875 BaseBdev1_malloc 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.875 true 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.875 [2024-11-26 17:26:21.243581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:43.875 [2024-11-26 17:26:21.243665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:43.875 [2024-11-26 17:26:21.243693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:43.875 [2024-11-26 17:26:21.243709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:43.875 [2024-11-26 17:26:21.246316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:43.875 [2024-11-26 17:26:21.246363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:43.875 BaseBdev1 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.875 BaseBdev2_malloc 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:43.875 17:26:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.875 true 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.875 [2024-11-26 17:26:21.307980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:43.875 [2024-11-26 17:26:21.308042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:43.875 [2024-11-26 17:26:21.308074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:43.875 [2024-11-26 17:26:21.308088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:43.875 [2024-11-26 17:26:21.310563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:43.875 [2024-11-26 17:26:21.310608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:43.875 BaseBdev2 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.875 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.875 [2024-11-26 17:26:21.320066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:29:44.134 [2024-11-26 17:26:21.322349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:44.134 [2024-11-26 17:26:21.322587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:44.134 [2024-11-26 17:26:21.322604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:44.134 [2024-11-26 17:26:21.322898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:44.134 [2024-11-26 17:26:21.323121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:44.134 [2024-11-26 17:26:21.323134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:44.134 [2024-11-26 17:26:21.323301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:44.134 "name": "raid_bdev1", 00:29:44.134 "uuid": "22a089b9-6034-4fdc-a908-009abb0ca2f2", 00:29:44.134 "strip_size_kb": 0, 00:29:44.134 "state": "online", 00:29:44.134 "raid_level": "raid1", 00:29:44.134 "superblock": true, 00:29:44.134 "num_base_bdevs": 2, 00:29:44.134 "num_base_bdevs_discovered": 2, 00:29:44.134 "num_base_bdevs_operational": 2, 00:29:44.134 "base_bdevs_list": [ 00:29:44.134 { 00:29:44.134 "name": "BaseBdev1", 00:29:44.134 "uuid": "9e027e08-ae75-518a-ae8c-2aa979eb8e5c", 00:29:44.134 "is_configured": true, 00:29:44.134 "data_offset": 2048, 00:29:44.134 "data_size": 63488 00:29:44.134 }, 00:29:44.134 { 00:29:44.134 "name": "BaseBdev2", 00:29:44.134 "uuid": "f4058e55-e7ca-5c6b-8449-21909a40a9d0", 00:29:44.134 "is_configured": true, 00:29:44.134 "data_offset": 2048, 00:29:44.134 "data_size": 63488 00:29:44.134 } 00:29:44.134 ] 00:29:44.134 }' 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:44.134 17:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.393 17:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:29:44.393 17:26:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:44.652 [2024-11-26 17:26:21.861626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.590 [2024-11-26 17:26:22.783862] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:29:45.590 [2024-11-26 17:26:22.783934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:45.590 [2024-11-26 17:26:22.784140] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:45.590 "name": "raid_bdev1", 00:29:45.590 "uuid": "22a089b9-6034-4fdc-a908-009abb0ca2f2", 00:29:45.590 "strip_size_kb": 0, 00:29:45.590 "state": "online", 00:29:45.590 "raid_level": "raid1", 00:29:45.590 "superblock": true, 00:29:45.590 "num_base_bdevs": 2, 00:29:45.590 "num_base_bdevs_discovered": 1, 00:29:45.590 "num_base_bdevs_operational": 1, 00:29:45.590 "base_bdevs_list": [ 00:29:45.590 { 00:29:45.590 "name": null, 00:29:45.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.590 "is_configured": false, 00:29:45.590 "data_offset": 0, 00:29:45.590 "data_size": 63488 00:29:45.590 }, 00:29:45.590 { 00:29:45.590 "name": 
"BaseBdev2", 00:29:45.590 "uuid": "f4058e55-e7ca-5c6b-8449-21909a40a9d0", 00:29:45.590 "is_configured": true, 00:29:45.590 "data_offset": 2048, 00:29:45.590 "data_size": 63488 00:29:45.590 } 00:29:45.590 ] 00:29:45.590 }' 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:45.590 17:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.850 [2024-11-26 17:26:23.254183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:45.850 [2024-11-26 17:26:23.254389] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:45.850 [2024-11-26 17:26:23.257714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:45.850 [2024-11-26 17:26:23.257888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:45.850 [2024-11-26 17:26:23.257972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:45.850 [2024-11-26 17:26:23.257989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64074 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64074 ']' 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64074 00:29:45.850 { 00:29:45.850 "results": [ 
00:29:45.850 { 00:29:45.850 "job": "raid_bdev1", 00:29:45.850 "core_mask": "0x1", 00:29:45.850 "workload": "randrw", 00:29:45.850 "percentage": 50, 00:29:45.850 "status": "finished", 00:29:45.850 "queue_depth": 1, 00:29:45.850 "io_size": 131072, 00:29:45.850 "runtime": 1.39064, 00:29:45.850 "iops": 19324.1960536156, 00:29:45.850 "mibps": 2415.52450670195, 00:29:45.850 "io_failed": 0, 00:29:45.850 "io_timeout": 0, 00:29:45.850 "avg_latency_us": 48.77533909943243, 00:29:45.850 "min_latency_us": 24.502857142857142, 00:29:45.850 "max_latency_us": 1591.5885714285714 00:29:45.850 } 00:29:45.850 ], 00:29:45.850 "core_count": 1 00:29:45.850 } 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.850 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64074 00:29:46.109 killing process with pid 64074 00:29:46.109 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:46.109 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:46.109 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64074' 00:29:46.110 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64074 00:29:46.110 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64074 00:29:46.110 [2024-11-26 17:26:23.297387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:46.110 [2024-11-26 17:26:23.436810] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ML71CaE1B1 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # grep raid_bdev1 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:29:47.488 ************************************ 00:29:47.488 END TEST raid_write_error_test 00:29:47.488 ************************************ 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:29:47.488 00:29:47.488 real 0m4.579s 00:29:47.488 user 0m5.539s 00:29:47.488 sys 0m0.594s 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.488 17:26:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.488 17:26:24 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:29:47.488 17:26:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:29:47.488 17:26:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:29:47.488 17:26:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:47.488 17:26:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.488 17:26:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:47.488 ************************************ 00:29:47.488 START TEST raid_state_function_test 00:29:47.488 ************************************ 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:47.488 17:26:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64218 00:29:47.488 Process raid pid: 64218 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64218' 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64218 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64218 ']' 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.488 17:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.488 [2024-11-26 17:26:24.847068] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:29:47.488 [2024-11-26 17:26:24.847202] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.747 [2024-11-26 17:26:25.021768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.747 [2024-11-26 17:26:25.142600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.084 [2024-11-26 17:26:25.366118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:48.084 [2024-11-26 17:26:25.366155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:48.343 17:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.343 17:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:29:48.343 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:48.343 17:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.343 17:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.602 [2024-11-26 17:26:25.793577] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:48.602 [2024-11-26 17:26:25.793638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:48.602 [2024-11-26 17:26:25.793650] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:48.602 [2024-11-26 17:26:25.793664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:48.602 [2024-11-26 17:26:25.793672] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:48.602 [2024-11-26 17:26:25.793684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:48.602 "name": "Existed_Raid", 00:29:48.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.602 "strip_size_kb": 64, 00:29:48.602 "state": "configuring", 00:29:48.602 "raid_level": "raid0", 00:29:48.602 "superblock": false, 00:29:48.602 "num_base_bdevs": 3, 00:29:48.602 "num_base_bdevs_discovered": 0, 00:29:48.602 "num_base_bdevs_operational": 3, 00:29:48.602 "base_bdevs_list": [ 00:29:48.602 { 00:29:48.602 "name": "BaseBdev1", 00:29:48.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.602 "is_configured": false, 00:29:48.602 "data_offset": 0, 00:29:48.602 "data_size": 0 00:29:48.602 }, 00:29:48.602 { 00:29:48.602 "name": "BaseBdev2", 00:29:48.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.602 "is_configured": false, 00:29:48.602 "data_offset": 0, 00:29:48.602 "data_size": 0 00:29:48.602 }, 00:29:48.602 { 00:29:48.602 "name": "BaseBdev3", 00:29:48.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.602 "is_configured": false, 00:29:48.602 "data_offset": 0, 00:29:48.602 "data_size": 0 00:29:48.602 } 00:29:48.602 ] 00:29:48.602 }' 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:48.602 17:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.862 17:26:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.862 [2024-11-26 17:26:26.225646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:48.862 [2024-11-26 17:26:26.225689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.862 [2024-11-26 17:26:26.237641] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:48.862 [2024-11-26 17:26:26.237698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:48.862 [2024-11-26 17:26:26.237710] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:48.862 [2024-11-26 17:26:26.237724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:48.862 [2024-11-26 17:26:26.237732] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:48.862 [2024-11-26 17:26:26.237746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.862 [2024-11-26 17:26:26.287874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:48.862 BaseBdev1 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.862 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.121 [ 00:29:49.121 { 00:29:49.121 "name": "BaseBdev1", 00:29:49.121 "aliases": [ 00:29:49.121 "8cad9252-8f83-4cf1-ad8d-14ed76d5cb3e" 00:29:49.121 ], 00:29:49.121 
"product_name": "Malloc disk", 00:29:49.121 "block_size": 512, 00:29:49.121 "num_blocks": 65536, 00:29:49.121 "uuid": "8cad9252-8f83-4cf1-ad8d-14ed76d5cb3e", 00:29:49.121 "assigned_rate_limits": { 00:29:49.121 "rw_ios_per_sec": 0, 00:29:49.121 "rw_mbytes_per_sec": 0, 00:29:49.121 "r_mbytes_per_sec": 0, 00:29:49.121 "w_mbytes_per_sec": 0 00:29:49.121 }, 00:29:49.121 "claimed": true, 00:29:49.121 "claim_type": "exclusive_write", 00:29:49.121 "zoned": false, 00:29:49.121 "supported_io_types": { 00:29:49.121 "read": true, 00:29:49.121 "write": true, 00:29:49.121 "unmap": true, 00:29:49.121 "flush": true, 00:29:49.121 "reset": true, 00:29:49.121 "nvme_admin": false, 00:29:49.121 "nvme_io": false, 00:29:49.121 "nvme_io_md": false, 00:29:49.121 "write_zeroes": true, 00:29:49.121 "zcopy": true, 00:29:49.121 "get_zone_info": false, 00:29:49.121 "zone_management": false, 00:29:49.121 "zone_append": false, 00:29:49.121 "compare": false, 00:29:49.121 "compare_and_write": false, 00:29:49.121 "abort": true, 00:29:49.121 "seek_hole": false, 00:29:49.121 "seek_data": false, 00:29:49.121 "copy": true, 00:29:49.121 "nvme_iov_md": false 00:29:49.121 }, 00:29:49.121 "memory_domains": [ 00:29:49.121 { 00:29:49.121 "dma_device_id": "system", 00:29:49.121 "dma_device_type": 1 00:29:49.121 }, 00:29:49.121 { 00:29:49.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:49.121 "dma_device_type": 2 00:29:49.121 } 00:29:49.121 ], 00:29:49.121 "driver_specific": {} 00:29:49.121 } 00:29:49.121 ] 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:49.121 17:26:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:49.121 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.122 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.122 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.122 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:49.122 "name": "Existed_Raid", 00:29:49.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.122 "strip_size_kb": 64, 00:29:49.122 "state": "configuring", 00:29:49.122 "raid_level": "raid0", 00:29:49.122 "superblock": false, 00:29:49.122 "num_base_bdevs": 3, 00:29:49.122 "num_base_bdevs_discovered": 1, 00:29:49.122 "num_base_bdevs_operational": 3, 00:29:49.122 "base_bdevs_list": [ 00:29:49.122 { 00:29:49.122 "name": "BaseBdev1", 
00:29:49.122 "uuid": "8cad9252-8f83-4cf1-ad8d-14ed76d5cb3e", 00:29:49.122 "is_configured": true, 00:29:49.122 "data_offset": 0, 00:29:49.122 "data_size": 65536 00:29:49.122 }, 00:29:49.122 { 00:29:49.122 "name": "BaseBdev2", 00:29:49.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.122 "is_configured": false, 00:29:49.122 "data_offset": 0, 00:29:49.122 "data_size": 0 00:29:49.122 }, 00:29:49.122 { 00:29:49.122 "name": "BaseBdev3", 00:29:49.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.122 "is_configured": false, 00:29:49.122 "data_offset": 0, 00:29:49.122 "data_size": 0 00:29:49.122 } 00:29:49.122 ] 00:29:49.122 }' 00:29:49.122 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:49.122 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.381 [2024-11-26 17:26:26.780076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:49.381 [2024-11-26 17:26:26.780129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.381 [2024-11-26 
17:26:26.788116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:49.381 [2024-11-26 17:26:26.790225] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:49.381 [2024-11-26 17:26:26.790267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:49.381 [2024-11-26 17:26:26.790279] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:49.381 [2024-11-26 17:26:26.790292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.381 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.639 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.639 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:49.639 "name": "Existed_Raid", 00:29:49.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.639 "strip_size_kb": 64, 00:29:49.639 "state": "configuring", 00:29:49.639 "raid_level": "raid0", 00:29:49.639 "superblock": false, 00:29:49.639 "num_base_bdevs": 3, 00:29:49.639 "num_base_bdevs_discovered": 1, 00:29:49.639 "num_base_bdevs_operational": 3, 00:29:49.639 "base_bdevs_list": [ 00:29:49.639 { 00:29:49.639 "name": "BaseBdev1", 00:29:49.639 "uuid": "8cad9252-8f83-4cf1-ad8d-14ed76d5cb3e", 00:29:49.639 "is_configured": true, 00:29:49.639 "data_offset": 0, 00:29:49.639 "data_size": 65536 00:29:49.639 }, 00:29:49.639 { 00:29:49.639 "name": "BaseBdev2", 00:29:49.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.639 "is_configured": false, 00:29:49.639 "data_offset": 0, 00:29:49.639 "data_size": 0 00:29:49.639 }, 00:29:49.639 { 00:29:49.639 "name": "BaseBdev3", 00:29:49.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.639 "is_configured": false, 00:29:49.639 "data_offset": 0, 00:29:49.639 "data_size": 0 00:29:49.639 } 00:29:49.639 ] 00:29:49.639 }' 00:29:49.639 17:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:29:49.639 17:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.898 [2024-11-26 17:26:27.275581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:49.898 BaseBdev2 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:49.898 17:26:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.898 [ 00:29:49.898 { 00:29:49.898 "name": "BaseBdev2", 00:29:49.898 "aliases": [ 00:29:49.898 "750d3ddb-dae5-4305-ac66-07a6f3b3a6a3" 00:29:49.898 ], 00:29:49.898 "product_name": "Malloc disk", 00:29:49.898 "block_size": 512, 00:29:49.898 "num_blocks": 65536, 00:29:49.898 "uuid": "750d3ddb-dae5-4305-ac66-07a6f3b3a6a3", 00:29:49.898 "assigned_rate_limits": { 00:29:49.898 "rw_ios_per_sec": 0, 00:29:49.898 "rw_mbytes_per_sec": 0, 00:29:49.898 "r_mbytes_per_sec": 0, 00:29:49.898 "w_mbytes_per_sec": 0 00:29:49.898 }, 00:29:49.898 "claimed": true, 00:29:49.898 "claim_type": "exclusive_write", 00:29:49.898 "zoned": false, 00:29:49.898 "supported_io_types": { 00:29:49.898 "read": true, 00:29:49.898 "write": true, 00:29:49.898 "unmap": true, 00:29:49.898 "flush": true, 00:29:49.898 "reset": true, 00:29:49.898 "nvme_admin": false, 00:29:49.898 "nvme_io": false, 00:29:49.898 "nvme_io_md": false, 00:29:49.898 "write_zeroes": true, 00:29:49.898 "zcopy": true, 00:29:49.898 "get_zone_info": false, 00:29:49.898 "zone_management": false, 00:29:49.898 "zone_append": false, 00:29:49.898 "compare": false, 00:29:49.898 "compare_and_write": false, 00:29:49.898 "abort": true, 00:29:49.898 "seek_hole": false, 00:29:49.898 "seek_data": false, 00:29:49.898 "copy": true, 00:29:49.898 "nvme_iov_md": false 00:29:49.898 }, 00:29:49.898 "memory_domains": [ 00:29:49.898 { 00:29:49.898 "dma_device_id": "system", 00:29:49.898 "dma_device_type": 1 00:29:49.898 }, 00:29:49.898 { 00:29:49.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:49.898 "dma_device_type": 2 00:29:49.898 } 00:29:49.898 ], 00:29:49.898 "driver_specific": {} 00:29:49.898 } 00:29:49.898 ] 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.898 17:26:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.898 17:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.157 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:50.157 "name": "Existed_Raid", 00:29:50.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.157 "strip_size_kb": 64, 00:29:50.157 "state": "configuring", 00:29:50.157 "raid_level": "raid0", 00:29:50.157 "superblock": false, 00:29:50.157 "num_base_bdevs": 3, 00:29:50.157 "num_base_bdevs_discovered": 2, 00:29:50.157 "num_base_bdevs_operational": 3, 00:29:50.157 "base_bdevs_list": [ 00:29:50.157 { 00:29:50.157 "name": "BaseBdev1", 00:29:50.157 "uuid": "8cad9252-8f83-4cf1-ad8d-14ed76d5cb3e", 00:29:50.157 "is_configured": true, 00:29:50.157 "data_offset": 0, 00:29:50.157 "data_size": 65536 00:29:50.157 }, 00:29:50.157 { 00:29:50.157 "name": "BaseBdev2", 00:29:50.157 "uuid": "750d3ddb-dae5-4305-ac66-07a6f3b3a6a3", 00:29:50.157 "is_configured": true, 00:29:50.157 "data_offset": 0, 00:29:50.157 "data_size": 65536 00:29:50.157 }, 00:29:50.157 { 00:29:50.157 "name": "BaseBdev3", 00:29:50.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.157 "is_configured": false, 00:29:50.157 "data_offset": 0, 00:29:50.157 "data_size": 0 00:29:50.157 } 00:29:50.157 ] 00:29:50.157 }' 00:29:50.157 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:50.157 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.417 [2024-11-26 17:26:27.779042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:50.417 [2024-11-26 17:26:27.779287] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:50.417 [2024-11-26 17:26:27.779317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:29:50.417 [2024-11-26 17:26:27.779637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:50.417 [2024-11-26 17:26:27.779819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:50.417 [2024-11-26 17:26:27.779830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:50.417 [2024-11-26 17:26:27.780154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.417 BaseBdev3 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.417 
17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.417 [ 00:29:50.417 { 00:29:50.417 "name": "BaseBdev3", 00:29:50.417 "aliases": [ 00:29:50.417 "bbba285d-3bfd-4cc4-ace8-2bd7fff5e0f5" 00:29:50.417 ], 00:29:50.417 "product_name": "Malloc disk", 00:29:50.417 "block_size": 512, 00:29:50.417 "num_blocks": 65536, 00:29:50.417 "uuid": "bbba285d-3bfd-4cc4-ace8-2bd7fff5e0f5", 00:29:50.417 "assigned_rate_limits": { 00:29:50.417 "rw_ios_per_sec": 0, 00:29:50.417 "rw_mbytes_per_sec": 0, 00:29:50.417 "r_mbytes_per_sec": 0, 00:29:50.417 "w_mbytes_per_sec": 0 00:29:50.417 }, 00:29:50.417 "claimed": true, 00:29:50.417 "claim_type": "exclusive_write", 00:29:50.417 "zoned": false, 00:29:50.417 "supported_io_types": { 00:29:50.417 "read": true, 00:29:50.417 "write": true, 00:29:50.417 "unmap": true, 00:29:50.417 "flush": true, 00:29:50.417 "reset": true, 00:29:50.417 "nvme_admin": false, 00:29:50.417 "nvme_io": false, 00:29:50.417 "nvme_io_md": false, 00:29:50.417 "write_zeroes": true, 00:29:50.417 "zcopy": true, 00:29:50.417 "get_zone_info": false, 00:29:50.417 "zone_management": false, 00:29:50.417 "zone_append": false, 00:29:50.417 "compare": false, 00:29:50.417 "compare_and_write": false, 00:29:50.417 "abort": true, 00:29:50.417 "seek_hole": false, 00:29:50.417 "seek_data": false, 00:29:50.417 "copy": true, 00:29:50.417 "nvme_iov_md": false 00:29:50.417 }, 00:29:50.417 "memory_domains": [ 00:29:50.417 { 00:29:50.417 "dma_device_id": "system", 00:29:50.417 "dma_device_type": 1 00:29:50.417 }, 00:29:50.417 { 00:29:50.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:50.417 "dma_device_type": 2 00:29:50.417 } 00:29:50.417 ], 00:29:50.417 "driver_specific": {} 00:29:50.417 } 00:29:50.417 ] 
00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:50.417 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:50.418 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.676 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:50.676 "name": "Existed_Raid", 00:29:50.676 "uuid": "a1f2de29-11be-45c8-b1c2-c86ae58f8d86", 00:29:50.676 "strip_size_kb": 64, 00:29:50.676 "state": "online", 00:29:50.676 "raid_level": "raid0", 00:29:50.676 "superblock": false, 00:29:50.676 "num_base_bdevs": 3, 00:29:50.676 "num_base_bdevs_discovered": 3, 00:29:50.676 "num_base_bdevs_operational": 3, 00:29:50.676 "base_bdevs_list": [ 00:29:50.676 { 00:29:50.676 "name": "BaseBdev1", 00:29:50.676 "uuid": "8cad9252-8f83-4cf1-ad8d-14ed76d5cb3e", 00:29:50.676 "is_configured": true, 00:29:50.676 "data_offset": 0, 00:29:50.676 "data_size": 65536 00:29:50.676 }, 00:29:50.676 { 00:29:50.676 "name": "BaseBdev2", 00:29:50.676 "uuid": "750d3ddb-dae5-4305-ac66-07a6f3b3a6a3", 00:29:50.676 "is_configured": true, 00:29:50.676 "data_offset": 0, 00:29:50.677 "data_size": 65536 00:29:50.677 }, 00:29:50.677 { 00:29:50.677 "name": "BaseBdev3", 00:29:50.677 "uuid": "bbba285d-3bfd-4cc4-ace8-2bd7fff5e0f5", 00:29:50.677 "is_configured": true, 00:29:50.677 "data_offset": 0, 00:29:50.677 "data_size": 65536 00:29:50.677 } 00:29:50.677 ] 00:29:50.677 }' 00:29:50.677 17:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:50.677 17:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.936 [2024-11-26 17:26:28.239530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.936 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:50.936 "name": "Existed_Raid", 00:29:50.936 "aliases": [ 00:29:50.936 "a1f2de29-11be-45c8-b1c2-c86ae58f8d86" 00:29:50.936 ], 00:29:50.936 "product_name": "Raid Volume", 00:29:50.936 "block_size": 512, 00:29:50.936 "num_blocks": 196608, 00:29:50.936 "uuid": "a1f2de29-11be-45c8-b1c2-c86ae58f8d86", 00:29:50.936 "assigned_rate_limits": { 00:29:50.936 "rw_ios_per_sec": 0, 00:29:50.936 "rw_mbytes_per_sec": 0, 00:29:50.936 "r_mbytes_per_sec": 0, 00:29:50.936 "w_mbytes_per_sec": 0 00:29:50.936 }, 00:29:50.936 "claimed": false, 00:29:50.936 "zoned": false, 00:29:50.936 "supported_io_types": { 00:29:50.936 "read": true, 00:29:50.936 "write": true, 00:29:50.936 "unmap": true, 00:29:50.936 "flush": true, 00:29:50.936 "reset": true, 00:29:50.936 "nvme_admin": false, 00:29:50.936 "nvme_io": false, 00:29:50.936 "nvme_io_md": false, 00:29:50.936 "write_zeroes": true, 00:29:50.936 "zcopy": false, 00:29:50.936 "get_zone_info": false, 00:29:50.936 "zone_management": false, 00:29:50.936 
"zone_append": false, 00:29:50.936 "compare": false, 00:29:50.936 "compare_and_write": false, 00:29:50.936 "abort": false, 00:29:50.936 "seek_hole": false, 00:29:50.936 "seek_data": false, 00:29:50.936 "copy": false, 00:29:50.936 "nvme_iov_md": false 00:29:50.936 }, 00:29:50.936 "memory_domains": [ 00:29:50.936 { 00:29:50.936 "dma_device_id": "system", 00:29:50.936 "dma_device_type": 1 00:29:50.936 }, 00:29:50.936 { 00:29:50.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:50.936 "dma_device_type": 2 00:29:50.936 }, 00:29:50.936 { 00:29:50.936 "dma_device_id": "system", 00:29:50.936 "dma_device_type": 1 00:29:50.936 }, 00:29:50.936 { 00:29:50.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:50.936 "dma_device_type": 2 00:29:50.936 }, 00:29:50.936 { 00:29:50.936 "dma_device_id": "system", 00:29:50.936 "dma_device_type": 1 00:29:50.936 }, 00:29:50.936 { 00:29:50.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:50.936 "dma_device_type": 2 00:29:50.936 } 00:29:50.936 ], 00:29:50.936 "driver_specific": { 00:29:50.936 "raid": { 00:29:50.936 "uuid": "a1f2de29-11be-45c8-b1c2-c86ae58f8d86", 00:29:50.936 "strip_size_kb": 64, 00:29:50.936 "state": "online", 00:29:50.936 "raid_level": "raid0", 00:29:50.936 "superblock": false, 00:29:50.936 "num_base_bdevs": 3, 00:29:50.936 "num_base_bdevs_discovered": 3, 00:29:50.936 "num_base_bdevs_operational": 3, 00:29:50.936 "base_bdevs_list": [ 00:29:50.936 { 00:29:50.936 "name": "BaseBdev1", 00:29:50.936 "uuid": "8cad9252-8f83-4cf1-ad8d-14ed76d5cb3e", 00:29:50.936 "is_configured": true, 00:29:50.936 "data_offset": 0, 00:29:50.936 "data_size": 65536 00:29:50.936 }, 00:29:50.936 { 00:29:50.936 "name": "BaseBdev2", 00:29:50.936 "uuid": "750d3ddb-dae5-4305-ac66-07a6f3b3a6a3", 00:29:50.936 "is_configured": true, 00:29:50.936 "data_offset": 0, 00:29:50.936 "data_size": 65536 00:29:50.936 }, 00:29:50.936 { 00:29:50.936 "name": "BaseBdev3", 00:29:50.936 "uuid": "bbba285d-3bfd-4cc4-ace8-2bd7fff5e0f5", 00:29:50.936 "is_configured": true, 
00:29:50.936 "data_offset": 0, 00:29:50.936 "data_size": 65536 00:29:50.936 } 00:29:50.937 ] 00:29:50.937 } 00:29:50.937 } 00:29:50.937 }' 00:29:50.937 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:50.937 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:50.937 BaseBdev2 00:29:50.937 BaseBdev3' 00:29:50.937 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:50.937 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:50.937 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:50.937 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:50.937 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:50.937 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.937 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:51.196 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.197 [2024-11-26 17:26:28.519352] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:51.197 [2024-11-26 17:26:28.519384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:51.197 [2024-11-26 17:26:28.519444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.197 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:51.455 17:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.455 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:51.455 "name": "Existed_Raid", 00:29:51.455 "uuid": "a1f2de29-11be-45c8-b1c2-c86ae58f8d86", 00:29:51.455 "strip_size_kb": 64, 00:29:51.455 "state": "offline", 00:29:51.455 "raid_level": "raid0", 00:29:51.455 "superblock": false, 00:29:51.455 "num_base_bdevs": 3, 00:29:51.455 "num_base_bdevs_discovered": 2, 00:29:51.455 "num_base_bdevs_operational": 2, 00:29:51.455 "base_bdevs_list": [ 00:29:51.455 { 00:29:51.455 "name": null, 00:29:51.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.455 "is_configured": false, 00:29:51.455 "data_offset": 0, 00:29:51.455 "data_size": 65536 00:29:51.455 }, 00:29:51.455 { 00:29:51.455 "name": "BaseBdev2", 00:29:51.455 "uuid": "750d3ddb-dae5-4305-ac66-07a6f3b3a6a3", 00:29:51.455 "is_configured": true, 00:29:51.455 "data_offset": 0, 00:29:51.455 "data_size": 65536 00:29:51.455 }, 00:29:51.455 { 00:29:51.455 "name": "BaseBdev3", 00:29:51.455 "uuid": "bbba285d-3bfd-4cc4-ace8-2bd7fff5e0f5", 00:29:51.455 "is_configured": true, 00:29:51.455 "data_offset": 0, 00:29:51.455 "data_size": 65536 00:29:51.455 } 00:29:51.455 ] 00:29:51.455 }' 00:29:51.455 17:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:51.455 17:26:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.714 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.714 [2024-11-26 17:26:29.115701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.972 17:26:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.972 [2024-11-26 17:26:29.271343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:51.972 [2024-11-26 17:26:29.271534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:29:51.972 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.232 BaseBdev2 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.232 [ 00:29:52.232 { 00:29:52.232 "name": "BaseBdev2", 00:29:52.232 "aliases": [ 00:29:52.232 "5915de0f-4844-43b4-9c5d-c58946e1e015" 00:29:52.232 ], 00:29:52.232 "product_name": "Malloc disk", 00:29:52.232 "block_size": 512, 00:29:52.232 "num_blocks": 65536, 00:29:52.232 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:52.232 "assigned_rate_limits": { 00:29:52.232 "rw_ios_per_sec": 0, 00:29:52.232 "rw_mbytes_per_sec": 0, 00:29:52.232 "r_mbytes_per_sec": 0, 00:29:52.232 "w_mbytes_per_sec": 0 00:29:52.232 }, 00:29:52.232 "claimed": false, 00:29:52.232 "zoned": false, 00:29:52.232 "supported_io_types": { 00:29:52.232 "read": true, 00:29:52.232 "write": true, 00:29:52.232 "unmap": true, 00:29:52.232 "flush": true, 00:29:52.232 "reset": true, 00:29:52.232 "nvme_admin": false, 00:29:52.232 "nvme_io": false, 00:29:52.232 "nvme_io_md": false, 00:29:52.232 "write_zeroes": true, 00:29:52.232 "zcopy": true, 00:29:52.232 "get_zone_info": false, 00:29:52.232 "zone_management": false, 00:29:52.232 "zone_append": false, 00:29:52.232 "compare": false, 00:29:52.232 "compare_and_write": false, 00:29:52.232 "abort": true, 00:29:52.232 "seek_hole": false, 00:29:52.232 "seek_data": false, 00:29:52.232 "copy": true, 00:29:52.232 "nvme_iov_md": false 00:29:52.232 }, 00:29:52.232 "memory_domains": [ 00:29:52.232 { 00:29:52.232 "dma_device_id": "system", 00:29:52.232 "dma_device_type": 1 00:29:52.232 }, 
00:29:52.232 { 00:29:52.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:52.232 "dma_device_type": 2 00:29:52.232 } 00:29:52.232 ], 00:29:52.232 "driver_specific": {} 00:29:52.232 } 00:29:52.232 ] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.232 BaseBdev3 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.232 [ 00:29:52.232 { 00:29:52.232 "name": "BaseBdev3", 00:29:52.232 "aliases": [ 00:29:52.232 "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6" 00:29:52.232 ], 00:29:52.232 "product_name": "Malloc disk", 00:29:52.232 "block_size": 512, 00:29:52.232 "num_blocks": 65536, 00:29:52.232 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:52.232 "assigned_rate_limits": { 00:29:52.232 "rw_ios_per_sec": 0, 00:29:52.232 "rw_mbytes_per_sec": 0, 00:29:52.232 "r_mbytes_per_sec": 0, 00:29:52.232 "w_mbytes_per_sec": 0 00:29:52.232 }, 00:29:52.232 "claimed": false, 00:29:52.232 "zoned": false, 00:29:52.232 "supported_io_types": { 00:29:52.232 "read": true, 00:29:52.232 "write": true, 00:29:52.232 "unmap": true, 00:29:52.232 "flush": true, 00:29:52.232 "reset": true, 00:29:52.232 "nvme_admin": false, 00:29:52.232 "nvme_io": false, 00:29:52.232 "nvme_io_md": false, 00:29:52.232 "write_zeroes": true, 00:29:52.232 "zcopy": true, 00:29:52.232 "get_zone_info": false, 00:29:52.232 "zone_management": false, 00:29:52.232 "zone_append": false, 00:29:52.232 "compare": false, 00:29:52.232 "compare_and_write": false, 00:29:52.232 "abort": true, 00:29:52.232 "seek_hole": false, 00:29:52.232 "seek_data": false, 00:29:52.232 "copy": true, 00:29:52.232 "nvme_iov_md": false 00:29:52.232 }, 00:29:52.232 "memory_domains": [ 00:29:52.232 { 00:29:52.232 "dma_device_id": "system", 00:29:52.232 "dma_device_type": 1 00:29:52.232 }, 00:29:52.232 { 
00:29:52.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:52.232 "dma_device_type": 2 00:29:52.232 } 00:29:52.232 ], 00:29:52.232 "driver_specific": {} 00:29:52.232 } 00:29:52.232 ] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.232 [2024-11-26 17:26:29.597894] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:52.232 [2024-11-26 17:26:29.598085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:52.232 [2024-11-26 17:26:29.598205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:52.232 [2024-11-26 17:26:29.600646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:52.232 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:52.233 "name": "Existed_Raid", 00:29:52.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.233 "strip_size_kb": 64, 00:29:52.233 "state": "configuring", 00:29:52.233 "raid_level": "raid0", 00:29:52.233 "superblock": false, 00:29:52.233 "num_base_bdevs": 3, 00:29:52.233 "num_base_bdevs_discovered": 2, 00:29:52.233 "num_base_bdevs_operational": 3, 00:29:52.233 "base_bdevs_list": [ 00:29:52.233 { 00:29:52.233 "name": "BaseBdev1", 00:29:52.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.233 
"is_configured": false, 00:29:52.233 "data_offset": 0, 00:29:52.233 "data_size": 0 00:29:52.233 }, 00:29:52.233 { 00:29:52.233 "name": "BaseBdev2", 00:29:52.233 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:52.233 "is_configured": true, 00:29:52.233 "data_offset": 0, 00:29:52.233 "data_size": 65536 00:29:52.233 }, 00:29:52.233 { 00:29:52.233 "name": "BaseBdev3", 00:29:52.233 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:52.233 "is_configured": true, 00:29:52.233 "data_offset": 0, 00:29:52.233 "data_size": 65536 00:29:52.233 } 00:29:52.233 ] 00:29:52.233 }' 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:52.233 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.800 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.801 [2024-11-26 17:26:30.034019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:52.801 17:26:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:52.801 "name": "Existed_Raid", 00:29:52.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.801 "strip_size_kb": 64, 00:29:52.801 "state": "configuring", 00:29:52.801 "raid_level": "raid0", 00:29:52.801 "superblock": false, 00:29:52.801 "num_base_bdevs": 3, 00:29:52.801 "num_base_bdevs_discovered": 1, 00:29:52.801 "num_base_bdevs_operational": 3, 00:29:52.801 "base_bdevs_list": [ 00:29:52.801 { 00:29:52.801 "name": "BaseBdev1", 00:29:52.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.801 "is_configured": false, 00:29:52.801 "data_offset": 0, 00:29:52.801 "data_size": 0 00:29:52.801 }, 00:29:52.801 { 00:29:52.801 "name": null, 00:29:52.801 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:52.801 "is_configured": false, 00:29:52.801 "data_offset": 0, 
00:29:52.801 "data_size": 65536 00:29:52.801 }, 00:29:52.801 { 00:29:52.801 "name": "BaseBdev3", 00:29:52.801 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:52.801 "is_configured": true, 00:29:52.801 "data_offset": 0, 00:29:52.801 "data_size": 65536 00:29:52.801 } 00:29:52.801 ] 00:29:52.801 }' 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:52.801 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.061 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:53.061 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:53.061 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.061 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.320 [2024-11-26 17:26:30.573611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:53.320 BaseBdev1 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.320 [ 00:29:53.320 { 00:29:53.320 "name": "BaseBdev1", 00:29:53.320 "aliases": [ 00:29:53.320 "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b" 00:29:53.320 ], 00:29:53.320 "product_name": "Malloc disk", 00:29:53.320 "block_size": 512, 00:29:53.320 "num_blocks": 65536, 00:29:53.320 "uuid": "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b", 00:29:53.320 "assigned_rate_limits": { 00:29:53.320 "rw_ios_per_sec": 0, 00:29:53.320 "rw_mbytes_per_sec": 0, 00:29:53.320 "r_mbytes_per_sec": 0, 00:29:53.320 "w_mbytes_per_sec": 0 00:29:53.320 }, 00:29:53.320 "claimed": true, 00:29:53.320 "claim_type": "exclusive_write", 00:29:53.320 "zoned": false, 00:29:53.320 "supported_io_types": { 00:29:53.320 "read": true, 00:29:53.320 "write": true, 00:29:53.320 "unmap": 
true, 00:29:53.320 "flush": true, 00:29:53.320 "reset": true, 00:29:53.320 "nvme_admin": false, 00:29:53.320 "nvme_io": false, 00:29:53.320 "nvme_io_md": false, 00:29:53.320 "write_zeroes": true, 00:29:53.320 "zcopy": true, 00:29:53.320 "get_zone_info": false, 00:29:53.320 "zone_management": false, 00:29:53.320 "zone_append": false, 00:29:53.320 "compare": false, 00:29:53.320 "compare_and_write": false, 00:29:53.320 "abort": true, 00:29:53.320 "seek_hole": false, 00:29:53.320 "seek_data": false, 00:29:53.320 "copy": true, 00:29:53.320 "nvme_iov_md": false 00:29:53.320 }, 00:29:53.320 "memory_domains": [ 00:29:53.320 { 00:29:53.320 "dma_device_id": "system", 00:29:53.320 "dma_device_type": 1 00:29:53.320 }, 00:29:53.320 { 00:29:53.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:53.320 "dma_device_type": 2 00:29:53.320 } 00:29:53.320 ], 00:29:53.320 "driver_specific": {} 00:29:53.320 } 00:29:53.320 ] 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:53.320 17:26:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.320 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:53.320 "name": "Existed_Raid", 00:29:53.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.320 "strip_size_kb": 64, 00:29:53.320 "state": "configuring", 00:29:53.320 "raid_level": "raid0", 00:29:53.320 "superblock": false, 00:29:53.320 "num_base_bdevs": 3, 00:29:53.320 "num_base_bdevs_discovered": 2, 00:29:53.320 "num_base_bdevs_operational": 3, 00:29:53.320 "base_bdevs_list": [ 00:29:53.320 { 00:29:53.320 "name": "BaseBdev1", 00:29:53.320 "uuid": "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b", 00:29:53.320 "is_configured": true, 00:29:53.320 "data_offset": 0, 00:29:53.320 "data_size": 65536 00:29:53.320 }, 00:29:53.320 { 00:29:53.320 "name": null, 00:29:53.320 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:53.320 "is_configured": false, 00:29:53.320 "data_offset": 0, 00:29:53.320 "data_size": 65536 00:29:53.320 }, 00:29:53.320 { 00:29:53.320 "name": "BaseBdev3", 00:29:53.321 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:53.321 "is_configured": true, 00:29:53.321 "data_offset": 0, 
00:29:53.321 "data_size": 65536 00:29:53.321 } 00:29:53.321 ] 00:29:53.321 }' 00:29:53.321 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:53.321 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.902 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:53.902 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:53.902 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.902 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.902 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.902 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:29:53.902 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.903 [2024-11-26 17:26:31.077801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:53.903 "name": "Existed_Raid", 00:29:53.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.903 "strip_size_kb": 64, 00:29:53.903 "state": "configuring", 00:29:53.903 "raid_level": "raid0", 00:29:53.903 "superblock": false, 00:29:53.903 "num_base_bdevs": 3, 00:29:53.903 "num_base_bdevs_discovered": 1, 00:29:53.903 "num_base_bdevs_operational": 3, 00:29:53.903 "base_bdevs_list": [ 00:29:53.903 { 00:29:53.903 "name": "BaseBdev1", 00:29:53.903 "uuid": "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b", 00:29:53.903 "is_configured": true, 00:29:53.903 "data_offset": 0, 00:29:53.903 "data_size": 65536 00:29:53.903 }, 00:29:53.903 { 
00:29:53.903 "name": null, 00:29:53.903 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:53.903 "is_configured": false, 00:29:53.903 "data_offset": 0, 00:29:53.903 "data_size": 65536 00:29:53.903 }, 00:29:53.903 { 00:29:53.903 "name": null, 00:29:53.903 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:53.903 "is_configured": false, 00:29:53.903 "data_offset": 0, 00:29:53.903 "data_size": 65536 00:29:53.903 } 00:29:53.903 ] 00:29:53.903 }' 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:53.903 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.162 [2024-11-26 17:26:31.549921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:54.162 "name": "Existed_Raid", 00:29:54.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.162 "strip_size_kb": 64, 00:29:54.162 "state": "configuring", 00:29:54.162 "raid_level": "raid0", 00:29:54.162 
"superblock": false, 00:29:54.162 "num_base_bdevs": 3, 00:29:54.162 "num_base_bdevs_discovered": 2, 00:29:54.162 "num_base_bdevs_operational": 3, 00:29:54.162 "base_bdevs_list": [ 00:29:54.162 { 00:29:54.162 "name": "BaseBdev1", 00:29:54.162 "uuid": "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b", 00:29:54.162 "is_configured": true, 00:29:54.162 "data_offset": 0, 00:29:54.162 "data_size": 65536 00:29:54.162 }, 00:29:54.162 { 00:29:54.162 "name": null, 00:29:54.162 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:54.162 "is_configured": false, 00:29:54.162 "data_offset": 0, 00:29:54.162 "data_size": 65536 00:29:54.162 }, 00:29:54.162 { 00:29:54.162 "name": "BaseBdev3", 00:29:54.162 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:54.162 "is_configured": true, 00:29:54.162 "data_offset": 0, 00:29:54.162 "data_size": 65536 00:29:54.162 } 00:29:54.162 ] 00:29:54.162 }' 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:54.162 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.731 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.731 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.731 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.731 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:54.731 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.731 [2024-11-26 17:26:32.014063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:54.731 "name": "Existed_Raid", 00:29:54.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.731 "strip_size_kb": 64, 00:29:54.731 "state": "configuring", 00:29:54.731 "raid_level": "raid0", 00:29:54.731 "superblock": false, 00:29:54.731 "num_base_bdevs": 3, 00:29:54.731 "num_base_bdevs_discovered": 1, 00:29:54.731 "num_base_bdevs_operational": 3, 00:29:54.731 "base_bdevs_list": [ 00:29:54.731 { 00:29:54.731 "name": null, 00:29:54.731 "uuid": "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b", 00:29:54.731 "is_configured": false, 00:29:54.731 "data_offset": 0, 00:29:54.731 "data_size": 65536 00:29:54.731 }, 00:29:54.731 { 00:29:54.731 "name": null, 00:29:54.731 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:54.731 "is_configured": false, 00:29:54.731 "data_offset": 0, 00:29:54.731 "data_size": 65536 00:29:54.731 }, 00:29:54.731 { 00:29:54.731 "name": "BaseBdev3", 00:29:54.731 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:54.731 "is_configured": true, 00:29:54.731 "data_offset": 0, 00:29:54.731 "data_size": 65536 00:29:54.731 } 00:29:54.731 ] 00:29:54.731 }' 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:54.731 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.299 [2024-11-26 17:26:32.585016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:55.299 "name": "Existed_Raid", 00:29:55.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.299 "strip_size_kb": 64, 00:29:55.299 "state": "configuring", 00:29:55.299 "raid_level": "raid0", 00:29:55.299 "superblock": false, 00:29:55.299 "num_base_bdevs": 3, 00:29:55.299 "num_base_bdevs_discovered": 2, 00:29:55.299 "num_base_bdevs_operational": 3, 00:29:55.299 "base_bdevs_list": [ 00:29:55.299 { 00:29:55.299 "name": null, 00:29:55.299 "uuid": "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b", 00:29:55.299 "is_configured": false, 00:29:55.299 "data_offset": 0, 00:29:55.299 "data_size": 65536 00:29:55.299 }, 00:29:55.299 { 00:29:55.299 "name": "BaseBdev2", 00:29:55.299 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:55.299 "is_configured": true, 00:29:55.299 "data_offset": 0, 00:29:55.299 "data_size": 65536 00:29:55.299 }, 00:29:55.299 { 00:29:55.299 "name": "BaseBdev3", 00:29:55.299 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:55.299 "is_configured": true, 00:29:55.299 "data_offset": 0, 00:29:55.299 "data_size": 65536 00:29:55.299 } 00:29:55.299 ] 00:29:55.299 }' 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:55.299 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.953 17:26:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 61fcc9b2-fdac-4f03-aeb8-8425c45fc69b 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.953 [2024-11-26 17:26:33.171589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:29:55.953 [2024-11-26 17:26:33.171654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:55.953 [2024-11-26 17:26:33.171666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:29:55.953 [2024-11-26 17:26:33.171963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:29:55.953 [2024-11-26 17:26:33.172155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:55.953 [2024-11-26 17:26:33.172168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:29:55.953 [2024-11-26 17:26:33.172484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:55.953 NewBaseBdev 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:29:55.953 [ 00:29:55.953 { 00:29:55.953 "name": "NewBaseBdev", 00:29:55.953 "aliases": [ 00:29:55.953 "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b" 00:29:55.953 ], 00:29:55.953 "product_name": "Malloc disk", 00:29:55.953 "block_size": 512, 00:29:55.953 "num_blocks": 65536, 00:29:55.953 "uuid": "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b", 00:29:55.953 "assigned_rate_limits": { 00:29:55.953 "rw_ios_per_sec": 0, 00:29:55.953 "rw_mbytes_per_sec": 0, 00:29:55.953 "r_mbytes_per_sec": 0, 00:29:55.953 "w_mbytes_per_sec": 0 00:29:55.953 }, 00:29:55.953 "claimed": true, 00:29:55.953 "claim_type": "exclusive_write", 00:29:55.953 "zoned": false, 00:29:55.953 "supported_io_types": { 00:29:55.953 "read": true, 00:29:55.953 "write": true, 00:29:55.953 "unmap": true, 00:29:55.953 "flush": true, 00:29:55.953 "reset": true, 00:29:55.953 "nvme_admin": false, 00:29:55.953 "nvme_io": false, 00:29:55.953 "nvme_io_md": false, 00:29:55.953 "write_zeroes": true, 00:29:55.953 "zcopy": true, 00:29:55.953 "get_zone_info": false, 00:29:55.953 "zone_management": false, 00:29:55.953 "zone_append": false, 00:29:55.953 "compare": false, 00:29:55.953 "compare_and_write": false, 00:29:55.953 "abort": true, 00:29:55.953 "seek_hole": false, 00:29:55.953 "seek_data": false, 00:29:55.953 "copy": true, 00:29:55.953 "nvme_iov_md": false 00:29:55.953 }, 00:29:55.953 "memory_domains": [ 00:29:55.953 { 00:29:55.953 "dma_device_id": "system", 00:29:55.953 "dma_device_type": 1 00:29:55.953 }, 00:29:55.953 { 00:29:55.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:55.953 "dma_device_type": 2 00:29:55.953 } 00:29:55.953 ], 00:29:55.953 "driver_specific": {} 00:29:55.953 } 00:29:55.953 ] 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:55.953 "name": "Existed_Raid", 00:29:55.953 "uuid": "cf2d1446-874e-41fa-b2f0-a9994206e7a1", 00:29:55.953 "strip_size_kb": 64, 00:29:55.953 "state": "online", 00:29:55.953 "raid_level": "raid0", 00:29:55.953 "superblock": false, 00:29:55.953 "num_base_bdevs": 3, 00:29:55.953 
"num_base_bdevs_discovered": 3, 00:29:55.953 "num_base_bdevs_operational": 3, 00:29:55.953 "base_bdevs_list": [ 00:29:55.953 { 00:29:55.953 "name": "NewBaseBdev", 00:29:55.953 "uuid": "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b", 00:29:55.953 "is_configured": true, 00:29:55.953 "data_offset": 0, 00:29:55.953 "data_size": 65536 00:29:55.953 }, 00:29:55.953 { 00:29:55.953 "name": "BaseBdev2", 00:29:55.953 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:55.953 "is_configured": true, 00:29:55.953 "data_offset": 0, 00:29:55.953 "data_size": 65536 00:29:55.953 }, 00:29:55.953 { 00:29:55.953 "name": "BaseBdev3", 00:29:55.953 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:55.953 "is_configured": true, 00:29:55.953 "data_offset": 0, 00:29:55.953 "data_size": 65536 00:29:55.953 } 00:29:55.953 ] 00:29:55.953 }' 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:55.953 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:29:56.247 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:56.247 [2024-11-26 17:26:33.684083] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:56.504 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.504 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.504 "name": "Existed_Raid", 00:29:56.504 "aliases": [ 00:29:56.504 "cf2d1446-874e-41fa-b2f0-a9994206e7a1" 00:29:56.504 ], 00:29:56.504 "product_name": "Raid Volume", 00:29:56.504 "block_size": 512, 00:29:56.504 "num_blocks": 196608, 00:29:56.504 "uuid": "cf2d1446-874e-41fa-b2f0-a9994206e7a1", 00:29:56.504 "assigned_rate_limits": { 00:29:56.504 "rw_ios_per_sec": 0, 00:29:56.504 "rw_mbytes_per_sec": 0, 00:29:56.504 "r_mbytes_per_sec": 0, 00:29:56.504 "w_mbytes_per_sec": 0 00:29:56.504 }, 00:29:56.504 "claimed": false, 00:29:56.504 "zoned": false, 00:29:56.504 "supported_io_types": { 00:29:56.504 "read": true, 00:29:56.504 "write": true, 00:29:56.504 "unmap": true, 00:29:56.504 "flush": true, 00:29:56.504 "reset": true, 00:29:56.504 "nvme_admin": false, 00:29:56.504 "nvme_io": false, 00:29:56.504 "nvme_io_md": false, 00:29:56.504 "write_zeroes": true, 00:29:56.504 "zcopy": false, 00:29:56.504 "get_zone_info": false, 00:29:56.504 "zone_management": false, 00:29:56.504 "zone_append": false, 00:29:56.504 "compare": false, 00:29:56.504 "compare_and_write": false, 00:29:56.504 "abort": false, 00:29:56.504 "seek_hole": false, 00:29:56.504 "seek_data": false, 00:29:56.504 "copy": false, 00:29:56.504 "nvme_iov_md": false 00:29:56.504 }, 00:29:56.504 "memory_domains": [ 00:29:56.504 { 00:29:56.504 "dma_device_id": "system", 00:29:56.504 "dma_device_type": 1 00:29:56.504 }, 00:29:56.504 { 00:29:56.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:56.504 "dma_device_type": 2 00:29:56.504 }, 00:29:56.504 
{ 00:29:56.504 "dma_device_id": "system", 00:29:56.504 "dma_device_type": 1 00:29:56.504 }, 00:29:56.504 { 00:29:56.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:56.504 "dma_device_type": 2 00:29:56.504 }, 00:29:56.504 { 00:29:56.504 "dma_device_id": "system", 00:29:56.504 "dma_device_type": 1 00:29:56.504 }, 00:29:56.504 { 00:29:56.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:56.504 "dma_device_type": 2 00:29:56.504 } 00:29:56.504 ], 00:29:56.504 "driver_specific": { 00:29:56.504 "raid": { 00:29:56.504 "uuid": "cf2d1446-874e-41fa-b2f0-a9994206e7a1", 00:29:56.504 "strip_size_kb": 64, 00:29:56.504 "state": "online", 00:29:56.504 "raid_level": "raid0", 00:29:56.504 "superblock": false, 00:29:56.504 "num_base_bdevs": 3, 00:29:56.504 "num_base_bdevs_discovered": 3, 00:29:56.504 "num_base_bdevs_operational": 3, 00:29:56.504 "base_bdevs_list": [ 00:29:56.504 { 00:29:56.504 "name": "NewBaseBdev", 00:29:56.504 "uuid": "61fcc9b2-fdac-4f03-aeb8-8425c45fc69b", 00:29:56.504 "is_configured": true, 00:29:56.504 "data_offset": 0, 00:29:56.504 "data_size": 65536 00:29:56.504 }, 00:29:56.504 { 00:29:56.504 "name": "BaseBdev2", 00:29:56.504 "uuid": "5915de0f-4844-43b4-9c5d-c58946e1e015", 00:29:56.504 "is_configured": true, 00:29:56.504 "data_offset": 0, 00:29:56.504 "data_size": 65536 00:29:56.504 }, 00:29:56.504 { 00:29:56.504 "name": "BaseBdev3", 00:29:56.504 "uuid": "c5df1f6b-10a5-4f6e-a448-ec538e24e8b6", 00:29:56.504 "is_configured": true, 00:29:56.504 "data_offset": 0, 00:29:56.504 "data_size": 65536 00:29:56.504 } 00:29:56.504 ] 00:29:56.504 } 00:29:56.505 } 00:29:56.505 }' 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:29:56.505 BaseBdev2 00:29:56.505 BaseBdev3' 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:56.505 
17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.505 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.762 [2024-11-26 17:26:33.959845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:56.762 [2024-11-26 17:26:33.959928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:56.762 [2024-11-26 17:26:33.960024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:56.762 [2024-11-26 17:26:33.960103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:56.762 [2024-11-26 17:26:33.960121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64218 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64218 ']' 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64218 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.762 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64218 00:29:56.762 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:56.762 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:56.762 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64218' 00:29:56.762 killing process with pid 64218 00:29:56.762 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64218 00:29:56.762 [2024-11-26 17:26:34.007076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:56.762 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64218 00:29:57.019 [2024-11-26 17:26:34.324650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:29:58.400 ************************************ 00:29:58.400 END TEST raid_state_function_test 00:29:58.400 ************************************ 00:29:58.400 00:29:58.400 real 0m10.753s 00:29:58.400 user 0m17.147s 
00:29:58.400 sys 0m1.872s 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.400 17:26:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:29:58.400 17:26:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:58.400 17:26:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.400 17:26:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:58.400 ************************************ 00:29:58.400 START TEST raid_state_function_test_sb 00:29:58.400 ************************************ 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64840 00:29:58.400 Process raid pid: 64840 00:29:58.400 17:26:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64840' 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64840 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64840 ']' 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.400 17:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:58.400 [2024-11-26 17:26:35.691033] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:29:58.400 [2024-11-26 17:26:35.691225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.659 [2024-11-26 17:26:35.881759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.659 [2024-11-26 17:26:36.003968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.916 [2024-11-26 17:26:36.231186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:58.916 [2024-11-26 17:26:36.231227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.175 [2024-11-26 17:26:36.585864] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:59.175 [2024-11-26 17:26:36.585934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:59.175 [2024-11-26 17:26:36.585947] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:59.175 [2024-11-26 17:26:36.585962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:59.175 [2024-11-26 17:26:36.585970] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:29:59.175 [2024-11-26 17:26:36.585983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.175 17:26:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.434 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:59.434 "name": "Existed_Raid", 00:29:59.434 "uuid": "48dd983a-2a33-4a32-adce-03a320d24702", 00:29:59.434 "strip_size_kb": 64, 00:29:59.434 "state": "configuring", 00:29:59.434 "raid_level": "raid0", 00:29:59.434 "superblock": true, 00:29:59.434 "num_base_bdevs": 3, 00:29:59.434 "num_base_bdevs_discovered": 0, 00:29:59.434 "num_base_bdevs_operational": 3, 00:29:59.434 "base_bdevs_list": [ 00:29:59.434 { 00:29:59.434 "name": "BaseBdev1", 00:29:59.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.434 "is_configured": false, 00:29:59.434 "data_offset": 0, 00:29:59.434 "data_size": 0 00:29:59.434 }, 00:29:59.434 { 00:29:59.434 "name": "BaseBdev2", 00:29:59.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.434 "is_configured": false, 00:29:59.434 "data_offset": 0, 00:29:59.434 "data_size": 0 00:29:59.434 }, 00:29:59.434 { 00:29:59.434 "name": "BaseBdev3", 00:29:59.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.434 "is_configured": false, 00:29:59.434 "data_offset": 0, 00:29:59.434 "data_size": 0 00:29:59.434 } 00:29:59.434 ] 00:29:59.434 }' 00:29:59.434 17:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:59.434 17:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.693 [2024-11-26 17:26:37.049855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:59.693 [2024-11-26 17:26:37.051494] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.693 [2024-11-26 17:26:37.057875] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:59.693 [2024-11-26 17:26:37.057928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:59.693 [2024-11-26 17:26:37.057940] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:59.693 [2024-11-26 17:26:37.057954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:59.693 [2024-11-26 17:26:37.057962] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:59.693 [2024-11-26 17:26:37.057975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.693 [2024-11-26 17:26:37.104494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:59.693 BaseBdev1 
00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:59.693 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.694 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.694 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.694 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:59.694 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.694 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.694 [ 00:29:59.694 { 00:29:59.694 "name": "BaseBdev1", 00:29:59.694 "aliases": [ 00:29:59.694 "cfd8a2c1-0dc6-49e3-aa59-eb75c05b1e21" 00:29:59.694 ], 00:29:59.694 "product_name": "Malloc disk", 00:29:59.694 "block_size": 512, 00:29:59.694 "num_blocks": 65536, 00:29:59.694 "uuid": "cfd8a2c1-0dc6-49e3-aa59-eb75c05b1e21", 00:29:59.694 "assigned_rate_limits": { 00:29:59.694 
"rw_ios_per_sec": 0, 00:29:59.694 "rw_mbytes_per_sec": 0, 00:29:59.694 "r_mbytes_per_sec": 0, 00:29:59.694 "w_mbytes_per_sec": 0 00:29:59.694 }, 00:29:59.694 "claimed": true, 00:29:59.694 "claim_type": "exclusive_write", 00:29:59.694 "zoned": false, 00:29:59.694 "supported_io_types": { 00:29:59.694 "read": true, 00:29:59.694 "write": true, 00:29:59.694 "unmap": true, 00:29:59.694 "flush": true, 00:29:59.694 "reset": true, 00:29:59.694 "nvme_admin": false, 00:29:59.694 "nvme_io": false, 00:29:59.694 "nvme_io_md": false, 00:29:59.694 "write_zeroes": true, 00:29:59.694 "zcopy": true, 00:29:59.694 "get_zone_info": false, 00:29:59.694 "zone_management": false, 00:29:59.694 "zone_append": false, 00:29:59.694 "compare": false, 00:29:59.694 "compare_and_write": false, 00:29:59.694 "abort": true, 00:29:59.694 "seek_hole": false, 00:29:59.694 "seek_data": false, 00:29:59.694 "copy": true, 00:29:59.694 "nvme_iov_md": false 00:29:59.694 }, 00:29:59.694 "memory_domains": [ 00:29:59.694 { 00:29:59.694 "dma_device_id": "system", 00:29:59.694 "dma_device_type": 1 00:29:59.694 }, 00:29:59.694 { 00:29:59.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:59.953 "dma_device_type": 2 00:29:59.953 } 00:29:59.953 ], 00:29:59.953 "driver_specific": {} 00:29:59.953 } 00:29:59.953 ] 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:59.953 "name": "Existed_Raid", 00:29:59.953 "uuid": "8a6713e8-e65d-498f-aa66-c5041a59636e", 00:29:59.953 "strip_size_kb": 64, 00:29:59.953 "state": "configuring", 00:29:59.953 "raid_level": "raid0", 00:29:59.953 "superblock": true, 00:29:59.953 "num_base_bdevs": 3, 00:29:59.953 "num_base_bdevs_discovered": 1, 00:29:59.953 "num_base_bdevs_operational": 3, 00:29:59.953 "base_bdevs_list": [ 00:29:59.953 { 00:29:59.953 "name": "BaseBdev1", 00:29:59.953 "uuid": "cfd8a2c1-0dc6-49e3-aa59-eb75c05b1e21", 00:29:59.953 "is_configured": true, 00:29:59.953 "data_offset": 2048, 00:29:59.953 "data_size": 63488 
00:29:59.953 }, 00:29:59.953 { 00:29:59.953 "name": "BaseBdev2", 00:29:59.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.953 "is_configured": false, 00:29:59.953 "data_offset": 0, 00:29:59.953 "data_size": 0 00:29:59.953 }, 00:29:59.953 { 00:29:59.953 "name": "BaseBdev3", 00:29:59.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.953 "is_configured": false, 00:29:59.953 "data_offset": 0, 00:29:59.953 "data_size": 0 00:29:59.953 } 00:29:59.953 ] 00:29:59.953 }' 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:59.953 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.211 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.212 [2024-11-26 17:26:37.632673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:00.212 [2024-11-26 17:26:37.632892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.212 [2024-11-26 17:26:37.640734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:00.212 [2024-11-26 
17:26:37.643161] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:00.212 [2024-11-26 17:26:37.643332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:00.212 [2024-11-26 17:26:37.643355] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:00.212 [2024-11-26 17:26:37.643372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.212 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.470 17:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.470 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:00.470 "name": "Existed_Raid", 00:30:00.470 "uuid": "ca04957a-b569-4870-bf67-f9b007acef0e", 00:30:00.470 "strip_size_kb": 64, 00:30:00.470 "state": "configuring", 00:30:00.470 "raid_level": "raid0", 00:30:00.470 "superblock": true, 00:30:00.470 "num_base_bdevs": 3, 00:30:00.470 "num_base_bdevs_discovered": 1, 00:30:00.470 "num_base_bdevs_operational": 3, 00:30:00.470 "base_bdevs_list": [ 00:30:00.470 { 00:30:00.470 "name": "BaseBdev1", 00:30:00.470 "uuid": "cfd8a2c1-0dc6-49e3-aa59-eb75c05b1e21", 00:30:00.470 "is_configured": true, 00:30:00.470 "data_offset": 2048, 00:30:00.470 "data_size": 63488 00:30:00.470 }, 00:30:00.470 { 00:30:00.470 "name": "BaseBdev2", 00:30:00.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.470 "is_configured": false, 00:30:00.470 "data_offset": 0, 00:30:00.470 "data_size": 0 00:30:00.470 }, 00:30:00.470 { 00:30:00.470 "name": "BaseBdev3", 00:30:00.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.471 "is_configured": false, 00:30:00.471 "data_offset": 0, 00:30:00.471 "data_size": 0 00:30:00.471 } 00:30:00.471 ] 00:30:00.471 }' 00:30:00.471 17:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:00.471 17:26:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.729 [2024-11-26 17:26:38.140263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:00.729 BaseBdev2 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.729 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.729 [ 00:30:00.729 { 00:30:00.729 "name": "BaseBdev2", 00:30:00.729 "aliases": [ 00:30:00.729 "8806b014-ec8f-49f5-a17f-cfa8ebcc6ad9" 00:30:00.729 ], 00:30:00.729 "product_name": "Malloc disk", 00:30:00.729 "block_size": 512, 00:30:00.729 "num_blocks": 65536, 00:30:00.729 "uuid": "8806b014-ec8f-49f5-a17f-cfa8ebcc6ad9", 00:30:00.729 "assigned_rate_limits": { 00:30:00.729 "rw_ios_per_sec": 0, 00:30:00.729 "rw_mbytes_per_sec": 0, 00:30:00.729 "r_mbytes_per_sec": 0, 00:30:00.729 "w_mbytes_per_sec": 0 00:30:00.729 }, 00:30:00.729 "claimed": true, 00:30:00.729 "claim_type": "exclusive_write", 00:30:00.729 "zoned": false, 00:30:00.729 "supported_io_types": { 00:30:00.729 "read": true, 00:30:00.729 "write": true, 00:30:00.729 "unmap": true, 00:30:00.729 "flush": true, 00:30:00.729 "reset": true, 00:30:00.729 "nvme_admin": false, 00:30:00.729 "nvme_io": false, 00:30:00.729 "nvme_io_md": false, 00:30:00.729 "write_zeroes": true, 00:30:00.729 "zcopy": true, 00:30:00.729 "get_zone_info": false, 00:30:00.729 "zone_management": false, 00:30:00.729 "zone_append": false, 00:30:00.729 "compare": false, 00:30:00.729 "compare_and_write": false, 00:30:00.729 "abort": true, 00:30:00.729 "seek_hole": false, 00:30:00.729 "seek_data": false, 00:30:00.729 "copy": true, 00:30:00.729 "nvme_iov_md": false 00:30:00.729 }, 00:30:00.729 "memory_domains": [ 00:30:00.729 { 00:30:00.729 "dma_device_id": "system", 00:30:00.729 "dma_device_type": 1 00:30:00.729 }, 00:30:00.729 { 00:30:00.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:00.729 "dma_device_type": 2 00:30:00.987 } 00:30:00.987 ], 00:30:00.987 "driver_specific": {} 00:30:00.987 } 00:30:00.987 ] 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:00.987 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:00.988 "name": "Existed_Raid", 00:30:00.988 "uuid": "ca04957a-b569-4870-bf67-f9b007acef0e", 00:30:00.988 "strip_size_kb": 64, 00:30:00.988 "state": "configuring", 00:30:00.988 "raid_level": "raid0", 00:30:00.988 "superblock": true, 00:30:00.988 "num_base_bdevs": 3, 00:30:00.988 "num_base_bdevs_discovered": 2, 00:30:00.988 "num_base_bdevs_operational": 3, 00:30:00.988 "base_bdevs_list": [ 00:30:00.988 { 00:30:00.988 "name": "BaseBdev1", 00:30:00.988 "uuid": "cfd8a2c1-0dc6-49e3-aa59-eb75c05b1e21", 00:30:00.988 "is_configured": true, 00:30:00.988 "data_offset": 2048, 00:30:00.988 "data_size": 63488 00:30:00.988 }, 00:30:00.988 { 00:30:00.988 "name": "BaseBdev2", 00:30:00.988 "uuid": "8806b014-ec8f-49f5-a17f-cfa8ebcc6ad9", 00:30:00.988 "is_configured": true, 00:30:00.988 "data_offset": 2048, 00:30:00.988 "data_size": 63488 00:30:00.988 }, 00:30:00.988 { 00:30:00.988 "name": "BaseBdev3", 00:30:00.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.988 "is_configured": false, 00:30:00.988 "data_offset": 0, 00:30:00.988 "data_size": 0 00:30:00.988 } 00:30:00.988 ] 00:30:00.988 }' 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:00.988 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.246 [2024-11-26 17:26:38.684118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:01.246 [2024-11-26 17:26:38.684360] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:01.246 [2024-11-26 17:26:38.684385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:01.246 [2024-11-26 17:26:38.684663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:01.246 BaseBdev3 00:30:01.246 [2024-11-26 17:26:38.684819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:01.246 [2024-11-26 17:26:38.684831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:01.246 [2024-11-26 17:26:38.684979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.246 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.505 [ 00:30:01.505 { 00:30:01.505 "name": "BaseBdev3", 00:30:01.505 "aliases": [ 00:30:01.505 "92c9cc85-2cc4-4b4c-ad23-22f2f614fb33" 00:30:01.505 ], 00:30:01.505 "product_name": "Malloc disk", 00:30:01.505 "block_size": 512, 00:30:01.505 "num_blocks": 65536, 00:30:01.505 "uuid": "92c9cc85-2cc4-4b4c-ad23-22f2f614fb33", 00:30:01.505 "assigned_rate_limits": { 00:30:01.505 "rw_ios_per_sec": 0, 00:30:01.505 "rw_mbytes_per_sec": 0, 00:30:01.505 "r_mbytes_per_sec": 0, 00:30:01.505 "w_mbytes_per_sec": 0 00:30:01.505 }, 00:30:01.505 "claimed": true, 00:30:01.505 "claim_type": "exclusive_write", 00:30:01.505 "zoned": false, 00:30:01.505 "supported_io_types": { 00:30:01.505 "read": true, 00:30:01.505 "write": true, 00:30:01.505 "unmap": true, 00:30:01.505 "flush": true, 00:30:01.505 "reset": true, 00:30:01.505 "nvme_admin": false, 00:30:01.505 "nvme_io": false, 00:30:01.505 "nvme_io_md": false, 00:30:01.505 "write_zeroes": true, 00:30:01.505 "zcopy": true, 00:30:01.505 "get_zone_info": false, 00:30:01.505 "zone_management": false, 00:30:01.505 "zone_append": false, 00:30:01.505 "compare": false, 00:30:01.505 "compare_and_write": false, 00:30:01.505 "abort": true, 00:30:01.505 "seek_hole": false, 00:30:01.505 "seek_data": false, 00:30:01.505 "copy": true, 00:30:01.505 "nvme_iov_md": false 00:30:01.505 }, 00:30:01.505 "memory_domains": [ 00:30:01.505 { 00:30:01.505 "dma_device_id": "system", 00:30:01.505 "dma_device_type": 1 00:30:01.505 }, 00:30:01.505 { 00:30:01.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:01.505 "dma_device_type": 2 00:30:01.505 } 00:30:01.505 ], 00:30:01.505 "driver_specific": 
{} 00:30:01.505 } 00:30:01.505 ] 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:01.505 "name": "Existed_Raid", 00:30:01.505 "uuid": "ca04957a-b569-4870-bf67-f9b007acef0e", 00:30:01.505 "strip_size_kb": 64, 00:30:01.505 "state": "online", 00:30:01.505 "raid_level": "raid0", 00:30:01.505 "superblock": true, 00:30:01.505 "num_base_bdevs": 3, 00:30:01.505 "num_base_bdevs_discovered": 3, 00:30:01.505 "num_base_bdevs_operational": 3, 00:30:01.505 "base_bdevs_list": [ 00:30:01.505 { 00:30:01.505 "name": "BaseBdev1", 00:30:01.505 "uuid": "cfd8a2c1-0dc6-49e3-aa59-eb75c05b1e21", 00:30:01.505 "is_configured": true, 00:30:01.505 "data_offset": 2048, 00:30:01.505 "data_size": 63488 00:30:01.505 }, 00:30:01.505 { 00:30:01.505 "name": "BaseBdev2", 00:30:01.505 "uuid": "8806b014-ec8f-49f5-a17f-cfa8ebcc6ad9", 00:30:01.505 "is_configured": true, 00:30:01.505 "data_offset": 2048, 00:30:01.505 "data_size": 63488 00:30:01.505 }, 00:30:01.505 { 00:30:01.505 "name": "BaseBdev3", 00:30:01.505 "uuid": "92c9cc85-2cc4-4b4c-ad23-22f2f614fb33", 00:30:01.505 "is_configured": true, 00:30:01.505 "data_offset": 2048, 00:30:01.505 "data_size": 63488 00:30:01.505 } 00:30:01.505 ] 00:30:01.505 }' 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:01.505 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.763 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:01.763 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:01.763 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:30:01.763 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:01.763 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:01.763 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:01.763 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:01.764 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:01.764 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.764 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.764 [2024-11-26 17:26:39.204662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:02.021 "name": "Existed_Raid", 00:30:02.021 "aliases": [ 00:30:02.021 "ca04957a-b569-4870-bf67-f9b007acef0e" 00:30:02.021 ], 00:30:02.021 "product_name": "Raid Volume", 00:30:02.021 "block_size": 512, 00:30:02.021 "num_blocks": 190464, 00:30:02.021 "uuid": "ca04957a-b569-4870-bf67-f9b007acef0e", 00:30:02.021 "assigned_rate_limits": { 00:30:02.021 "rw_ios_per_sec": 0, 00:30:02.021 "rw_mbytes_per_sec": 0, 00:30:02.021 "r_mbytes_per_sec": 0, 00:30:02.021 "w_mbytes_per_sec": 0 00:30:02.021 }, 00:30:02.021 "claimed": false, 00:30:02.021 "zoned": false, 00:30:02.021 "supported_io_types": { 00:30:02.021 "read": true, 00:30:02.021 "write": true, 00:30:02.021 "unmap": true, 00:30:02.021 "flush": true, 00:30:02.021 "reset": true, 00:30:02.021 "nvme_admin": false, 00:30:02.021 "nvme_io": false, 00:30:02.021 "nvme_io_md": false, 00:30:02.021 
"write_zeroes": true, 00:30:02.021 "zcopy": false, 00:30:02.021 "get_zone_info": false, 00:30:02.021 "zone_management": false, 00:30:02.021 "zone_append": false, 00:30:02.021 "compare": false, 00:30:02.021 "compare_and_write": false, 00:30:02.021 "abort": false, 00:30:02.021 "seek_hole": false, 00:30:02.021 "seek_data": false, 00:30:02.021 "copy": false, 00:30:02.021 "nvme_iov_md": false 00:30:02.021 }, 00:30:02.021 "memory_domains": [ 00:30:02.021 { 00:30:02.021 "dma_device_id": "system", 00:30:02.021 "dma_device_type": 1 00:30:02.021 }, 00:30:02.021 { 00:30:02.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:02.021 "dma_device_type": 2 00:30:02.021 }, 00:30:02.021 { 00:30:02.021 "dma_device_id": "system", 00:30:02.021 "dma_device_type": 1 00:30:02.021 }, 00:30:02.021 { 00:30:02.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:02.021 "dma_device_type": 2 00:30:02.021 }, 00:30:02.021 { 00:30:02.021 "dma_device_id": "system", 00:30:02.021 "dma_device_type": 1 00:30:02.021 }, 00:30:02.021 { 00:30:02.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:02.021 "dma_device_type": 2 00:30:02.021 } 00:30:02.021 ], 00:30:02.021 "driver_specific": { 00:30:02.021 "raid": { 00:30:02.021 "uuid": "ca04957a-b569-4870-bf67-f9b007acef0e", 00:30:02.021 "strip_size_kb": 64, 00:30:02.021 "state": "online", 00:30:02.021 "raid_level": "raid0", 00:30:02.021 "superblock": true, 00:30:02.021 "num_base_bdevs": 3, 00:30:02.021 "num_base_bdevs_discovered": 3, 00:30:02.021 "num_base_bdevs_operational": 3, 00:30:02.021 "base_bdevs_list": [ 00:30:02.021 { 00:30:02.021 "name": "BaseBdev1", 00:30:02.021 "uuid": "cfd8a2c1-0dc6-49e3-aa59-eb75c05b1e21", 00:30:02.021 "is_configured": true, 00:30:02.021 "data_offset": 2048, 00:30:02.021 "data_size": 63488 00:30:02.021 }, 00:30:02.021 { 00:30:02.021 "name": "BaseBdev2", 00:30:02.021 "uuid": "8806b014-ec8f-49f5-a17f-cfa8ebcc6ad9", 00:30:02.021 "is_configured": true, 00:30:02.021 "data_offset": 2048, 00:30:02.021 "data_size": 63488 00:30:02.021 }, 
00:30:02.021 { 00:30:02.021 "name": "BaseBdev3", 00:30:02.021 "uuid": "92c9cc85-2cc4-4b4c-ad23-22f2f614fb33", 00:30:02.021 "is_configured": true, 00:30:02.021 "data_offset": 2048, 00:30:02.021 "data_size": 63488 00:30:02.021 } 00:30:02.021 ] 00:30:02.021 } 00:30:02.021 } 00:30:02.021 }' 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:02.021 BaseBdev2 00:30:02.021 BaseBdev3' 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.021 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:02.022 
17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.022 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.280 [2024-11-26 17:26:39.480428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:02.280 [2024-11-26 17:26:39.480459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:02.280 [2024-11-26 17:26:39.480520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.280 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:02.280 "name": "Existed_Raid", 00:30:02.280 "uuid": "ca04957a-b569-4870-bf67-f9b007acef0e", 00:30:02.280 "strip_size_kb": 64, 00:30:02.280 "state": "offline", 00:30:02.280 "raid_level": "raid0", 00:30:02.280 "superblock": true, 00:30:02.280 "num_base_bdevs": 3, 00:30:02.280 "num_base_bdevs_discovered": 2, 00:30:02.280 "num_base_bdevs_operational": 2, 00:30:02.280 "base_bdevs_list": [ 00:30:02.280 { 00:30:02.280 "name": null, 00:30:02.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.280 "is_configured": false, 00:30:02.280 "data_offset": 0, 00:30:02.280 "data_size": 63488 00:30:02.280 }, 00:30:02.280 { 00:30:02.280 "name": "BaseBdev2", 00:30:02.280 "uuid": "8806b014-ec8f-49f5-a17f-cfa8ebcc6ad9", 00:30:02.280 "is_configured": true, 00:30:02.281 "data_offset": 2048, 00:30:02.281 "data_size": 63488 00:30:02.281 }, 00:30:02.281 { 00:30:02.281 "name": "BaseBdev3", 00:30:02.281 "uuid": "92c9cc85-2cc4-4b4c-ad23-22f2f614fb33", 
00:30:02.281 "is_configured": true, 00:30:02.281 "data_offset": 2048, 00:30:02.281 "data_size": 63488 00:30:02.281 } 00:30:02.281 ] 00:30:02.281 }' 00:30:02.281 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:02.281 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.848 [2024-11-26 17:26:40.108054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.848 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.848 [2024-11-26 17:26:40.262629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:02.848 [2024-11-26 17:26:40.262681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:03.107 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.108 BaseBdev2 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:03.108 17:26:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.108 [ 00:30:03.108 { 00:30:03.108 "name": "BaseBdev2", 00:30:03.108 "aliases": [ 00:30:03.108 "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58" 00:30:03.108 ], 00:30:03.108 "product_name": "Malloc disk", 00:30:03.108 "block_size": 512, 00:30:03.108 "num_blocks": 65536, 00:30:03.108 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:03.108 "assigned_rate_limits": { 00:30:03.108 "rw_ios_per_sec": 0, 00:30:03.108 "rw_mbytes_per_sec": 0, 00:30:03.108 "r_mbytes_per_sec": 0, 00:30:03.108 "w_mbytes_per_sec": 0 00:30:03.108 }, 00:30:03.108 "claimed": false, 00:30:03.108 "zoned": false, 00:30:03.108 "supported_io_types": { 00:30:03.108 "read": true, 00:30:03.108 "write": true, 00:30:03.108 "unmap": true, 00:30:03.108 "flush": true, 00:30:03.108 "reset": true, 00:30:03.108 "nvme_admin": false, 00:30:03.108 "nvme_io": false, 00:30:03.108 "nvme_io_md": false, 00:30:03.108 "write_zeroes": true, 00:30:03.108 "zcopy": true, 00:30:03.108 "get_zone_info": false, 00:30:03.108 
"zone_management": false, 00:30:03.108 "zone_append": false, 00:30:03.108 "compare": false, 00:30:03.108 "compare_and_write": false, 00:30:03.108 "abort": true, 00:30:03.108 "seek_hole": false, 00:30:03.108 "seek_data": false, 00:30:03.108 "copy": true, 00:30:03.108 "nvme_iov_md": false 00:30:03.108 }, 00:30:03.108 "memory_domains": [ 00:30:03.108 { 00:30:03.108 "dma_device_id": "system", 00:30:03.108 "dma_device_type": 1 00:30:03.108 }, 00:30:03.108 { 00:30:03.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:03.108 "dma_device_type": 2 00:30:03.108 } 00:30:03.108 ], 00:30:03.108 "driver_specific": {} 00:30:03.108 } 00:30:03.108 ] 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.108 BaseBdev3 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.108 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.367 [ 00:30:03.367 { 00:30:03.367 "name": "BaseBdev3", 00:30:03.367 "aliases": [ 00:30:03.367 "4efb729a-67bd-418b-9de5-b6f6d519b4d8" 00:30:03.367 ], 00:30:03.367 "product_name": "Malloc disk", 00:30:03.367 "block_size": 512, 00:30:03.367 "num_blocks": 65536, 00:30:03.367 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:03.367 "assigned_rate_limits": { 00:30:03.367 "rw_ios_per_sec": 0, 00:30:03.367 "rw_mbytes_per_sec": 0, 00:30:03.367 "r_mbytes_per_sec": 0, 00:30:03.367 "w_mbytes_per_sec": 0 00:30:03.367 }, 00:30:03.367 "claimed": false, 00:30:03.367 "zoned": false, 00:30:03.368 "supported_io_types": { 00:30:03.368 "read": true, 00:30:03.368 "write": true, 00:30:03.368 "unmap": true, 00:30:03.368 "flush": true, 00:30:03.368 "reset": true, 00:30:03.368 "nvme_admin": false, 00:30:03.368 "nvme_io": false, 00:30:03.368 "nvme_io_md": false, 00:30:03.368 "write_zeroes": true, 00:30:03.368 
"zcopy": true, 00:30:03.368 "get_zone_info": false, 00:30:03.368 "zone_management": false, 00:30:03.368 "zone_append": false, 00:30:03.368 "compare": false, 00:30:03.368 "compare_and_write": false, 00:30:03.368 "abort": true, 00:30:03.368 "seek_hole": false, 00:30:03.368 "seek_data": false, 00:30:03.368 "copy": true, 00:30:03.368 "nvme_iov_md": false 00:30:03.368 }, 00:30:03.368 "memory_domains": [ 00:30:03.368 { 00:30:03.368 "dma_device_id": "system", 00:30:03.368 "dma_device_type": 1 00:30:03.368 }, 00:30:03.368 { 00:30:03.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:03.368 "dma_device_type": 2 00:30:03.368 } 00:30:03.368 ], 00:30:03.368 "driver_specific": {} 00:30:03.368 } 00:30:03.368 ] 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.368 [2024-11-26 17:26:40.577924] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:03.368 [2024-11-26 17:26:40.577972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:03.368 [2024-11-26 17:26:40.577996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:03.368 [2024-11-26 17:26:40.580199] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.368 17:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:03.368 "name": "Existed_Raid", 00:30:03.368 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:03.368 "strip_size_kb": 64, 00:30:03.368 "state": "configuring", 00:30:03.368 "raid_level": "raid0", 00:30:03.368 "superblock": true, 00:30:03.368 "num_base_bdevs": 3, 00:30:03.368 "num_base_bdevs_discovered": 2, 00:30:03.368 "num_base_bdevs_operational": 3, 00:30:03.368 "base_bdevs_list": [ 00:30:03.368 { 00:30:03.368 "name": "BaseBdev1", 00:30:03.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.368 "is_configured": false, 00:30:03.368 "data_offset": 0, 00:30:03.368 "data_size": 0 00:30:03.368 }, 00:30:03.368 { 00:30:03.368 "name": "BaseBdev2", 00:30:03.368 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:03.368 "is_configured": true, 00:30:03.368 "data_offset": 2048, 00:30:03.368 "data_size": 63488 00:30:03.368 }, 00:30:03.368 { 00:30:03.368 "name": "BaseBdev3", 00:30:03.368 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:03.368 "is_configured": true, 00:30:03.368 "data_offset": 2048, 00:30:03.368 "data_size": 63488 00:30:03.368 } 00:30:03.368 ] 00:30:03.368 }' 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:03.368 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.626 [2024-11-26 17:26:41.046081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.626 17:26:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.626 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:03.884 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.884 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:03.884 "name": "Existed_Raid", 00:30:03.884 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:03.884 "strip_size_kb": 64, 
00:30:03.884 "state": "configuring", 00:30:03.884 "raid_level": "raid0", 00:30:03.884 "superblock": true, 00:30:03.884 "num_base_bdevs": 3, 00:30:03.884 "num_base_bdevs_discovered": 1, 00:30:03.884 "num_base_bdevs_operational": 3, 00:30:03.885 "base_bdevs_list": [ 00:30:03.885 { 00:30:03.885 "name": "BaseBdev1", 00:30:03.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.885 "is_configured": false, 00:30:03.885 "data_offset": 0, 00:30:03.885 "data_size": 0 00:30:03.885 }, 00:30:03.885 { 00:30:03.885 "name": null, 00:30:03.885 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:03.885 "is_configured": false, 00:30:03.885 "data_offset": 0, 00:30:03.885 "data_size": 63488 00:30:03.885 }, 00:30:03.885 { 00:30:03.885 "name": "BaseBdev3", 00:30:03.885 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:03.885 "is_configured": true, 00:30:03.885 "data_offset": 2048, 00:30:03.885 "data_size": 63488 00:30:03.885 } 00:30:03.885 ] 00:30:03.885 }' 00:30:03.885 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:03.885 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.144 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.144 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:04.144 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.144 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.144 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.144 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:04.144 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:30:04.144 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.144 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 [2024-11-26 17:26:41.592846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:04.404 BaseBdev1 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 
[ 00:30:04.404 { 00:30:04.404 "name": "BaseBdev1", 00:30:04.404 "aliases": [ 00:30:04.404 "d5e5c0d6-d487-4ce6-859d-a2564e220304" 00:30:04.404 ], 00:30:04.404 "product_name": "Malloc disk", 00:30:04.404 "block_size": 512, 00:30:04.404 "num_blocks": 65536, 00:30:04.404 "uuid": "d5e5c0d6-d487-4ce6-859d-a2564e220304", 00:30:04.404 "assigned_rate_limits": { 00:30:04.404 "rw_ios_per_sec": 0, 00:30:04.404 "rw_mbytes_per_sec": 0, 00:30:04.404 "r_mbytes_per_sec": 0, 00:30:04.404 "w_mbytes_per_sec": 0 00:30:04.404 }, 00:30:04.404 "claimed": true, 00:30:04.404 "claim_type": "exclusive_write", 00:30:04.404 "zoned": false, 00:30:04.404 "supported_io_types": { 00:30:04.404 "read": true, 00:30:04.404 "write": true, 00:30:04.404 "unmap": true, 00:30:04.404 "flush": true, 00:30:04.404 "reset": true, 00:30:04.404 "nvme_admin": false, 00:30:04.404 "nvme_io": false, 00:30:04.404 "nvme_io_md": false, 00:30:04.404 "write_zeroes": true, 00:30:04.404 "zcopy": true, 00:30:04.404 "get_zone_info": false, 00:30:04.404 "zone_management": false, 00:30:04.404 "zone_append": false, 00:30:04.404 "compare": false, 00:30:04.404 "compare_and_write": false, 00:30:04.404 "abort": true, 00:30:04.404 "seek_hole": false, 00:30:04.404 "seek_data": false, 00:30:04.404 "copy": true, 00:30:04.404 "nvme_iov_md": false 00:30:04.404 }, 00:30:04.404 "memory_domains": [ 00:30:04.404 { 00:30:04.404 "dma_device_id": "system", 00:30:04.404 "dma_device_type": 1 00:30:04.404 }, 00:30:04.404 { 00:30:04.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:04.404 "dma_device_type": 2 00:30:04.404 } 00:30:04.404 ], 00:30:04.404 "driver_specific": {} 00:30:04.404 } 00:30:04.404 ] 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:04.404 "name": "Existed_Raid", 00:30:04.404 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:04.404 "strip_size_kb": 64, 00:30:04.404 "state": "configuring", 00:30:04.404 "raid_level": "raid0", 00:30:04.404 "superblock": true, 
00:30:04.404 "num_base_bdevs": 3, 00:30:04.404 "num_base_bdevs_discovered": 2, 00:30:04.404 "num_base_bdevs_operational": 3, 00:30:04.404 "base_bdevs_list": [ 00:30:04.404 { 00:30:04.404 "name": "BaseBdev1", 00:30:04.404 "uuid": "d5e5c0d6-d487-4ce6-859d-a2564e220304", 00:30:04.404 "is_configured": true, 00:30:04.404 "data_offset": 2048, 00:30:04.404 "data_size": 63488 00:30:04.404 }, 00:30:04.404 { 00:30:04.404 "name": null, 00:30:04.404 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:04.404 "is_configured": false, 00:30:04.404 "data_offset": 0, 00:30:04.404 "data_size": 63488 00:30:04.404 }, 00:30:04.404 { 00:30:04.404 "name": "BaseBdev3", 00:30:04.404 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:04.404 "is_configured": true, 00:30:04.404 "data_offset": 2048, 00:30:04.404 "data_size": 63488 00:30:04.404 } 00:30:04.404 ] 00:30:04.404 }' 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:04.404 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.663 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:04.663 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.663 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.663 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.922 [2024-11-26 17:26:42.153032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.922 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:04.922 "name": "Existed_Raid", 00:30:04.922 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:04.922 "strip_size_kb": 64, 00:30:04.922 "state": "configuring", 00:30:04.922 "raid_level": "raid0", 00:30:04.922 "superblock": true, 00:30:04.922 "num_base_bdevs": 3, 00:30:04.922 "num_base_bdevs_discovered": 1, 00:30:04.922 "num_base_bdevs_operational": 3, 00:30:04.922 "base_bdevs_list": [ 00:30:04.922 { 00:30:04.922 "name": "BaseBdev1", 00:30:04.922 "uuid": "d5e5c0d6-d487-4ce6-859d-a2564e220304", 00:30:04.922 "is_configured": true, 00:30:04.922 "data_offset": 2048, 00:30:04.922 "data_size": 63488 00:30:04.922 }, 00:30:04.923 { 00:30:04.923 "name": null, 00:30:04.923 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:04.923 "is_configured": false, 00:30:04.923 "data_offset": 0, 00:30:04.923 "data_size": 63488 00:30:04.923 }, 00:30:04.923 { 00:30:04.923 "name": null, 00:30:04.923 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:04.923 "is_configured": false, 00:30:04.923 "data_offset": 0, 00:30:04.923 "data_size": 63488 00:30:04.923 } 00:30:04.923 ] 00:30:04.923 }' 00:30:04.923 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:04.923 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.181 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.181 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:05.181 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.181 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.440 [2024-11-26 17:26:42.669182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:05.440 "name": "Existed_Raid", 00:30:05.440 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:05.440 "strip_size_kb": 64, 00:30:05.440 "state": "configuring", 00:30:05.440 "raid_level": "raid0", 00:30:05.440 "superblock": true, 00:30:05.440 "num_base_bdevs": 3, 00:30:05.440 "num_base_bdevs_discovered": 2, 00:30:05.440 "num_base_bdevs_operational": 3, 00:30:05.440 "base_bdevs_list": [ 00:30:05.440 { 00:30:05.440 "name": "BaseBdev1", 00:30:05.440 "uuid": "d5e5c0d6-d487-4ce6-859d-a2564e220304", 00:30:05.440 "is_configured": true, 00:30:05.440 "data_offset": 2048, 00:30:05.440 "data_size": 63488 00:30:05.440 }, 00:30:05.440 { 00:30:05.440 "name": null, 00:30:05.440 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:05.440 "is_configured": false, 00:30:05.440 "data_offset": 0, 00:30:05.440 "data_size": 63488 00:30:05.440 }, 00:30:05.440 { 00:30:05.440 "name": "BaseBdev3", 00:30:05.440 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:05.440 "is_configured": true, 00:30:05.440 "data_offset": 2048, 00:30:05.440 "data_size": 63488 00:30:05.440 } 00:30:05.440 ] 00:30:05.440 }' 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:05.440 17:26:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:05.699 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.699 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:05.699 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.699 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.958 [2024-11-26 17:26:43.181343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:05.958 "name": "Existed_Raid", 00:30:05.958 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:05.958 "strip_size_kb": 64, 00:30:05.958 "state": "configuring", 00:30:05.958 "raid_level": "raid0", 00:30:05.958 "superblock": true, 00:30:05.958 "num_base_bdevs": 3, 00:30:05.958 "num_base_bdevs_discovered": 1, 00:30:05.958 "num_base_bdevs_operational": 3, 00:30:05.958 "base_bdevs_list": [ 00:30:05.958 { 00:30:05.958 "name": null, 00:30:05.958 "uuid": "d5e5c0d6-d487-4ce6-859d-a2564e220304", 00:30:05.958 "is_configured": false, 00:30:05.958 "data_offset": 0, 00:30:05.958 "data_size": 63488 00:30:05.958 }, 00:30:05.958 { 00:30:05.958 "name": null, 00:30:05.958 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:05.958 "is_configured": false, 00:30:05.958 "data_offset": 0, 00:30:05.958 
"data_size": 63488 00:30:05.958 }, 00:30:05.958 { 00:30:05.958 "name": "BaseBdev3", 00:30:05.958 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:05.958 "is_configured": true, 00:30:05.958 "data_offset": 2048, 00:30:05.958 "data_size": 63488 00:30:05.958 } 00:30:05.958 ] 00:30:05.958 }' 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:05.958 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.525 [2024-11-26 17:26:43.804836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:06.525 17:26:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:06.525 "name": "Existed_Raid", 00:30:06.525 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:06.525 "strip_size_kb": 64, 00:30:06.525 "state": "configuring", 00:30:06.525 "raid_level": "raid0", 00:30:06.525 "superblock": true, 00:30:06.525 "num_base_bdevs": 3, 00:30:06.525 
"num_base_bdevs_discovered": 2, 00:30:06.525 "num_base_bdevs_operational": 3, 00:30:06.525 "base_bdevs_list": [ 00:30:06.525 { 00:30:06.525 "name": null, 00:30:06.525 "uuid": "d5e5c0d6-d487-4ce6-859d-a2564e220304", 00:30:06.525 "is_configured": false, 00:30:06.525 "data_offset": 0, 00:30:06.525 "data_size": 63488 00:30:06.525 }, 00:30:06.525 { 00:30:06.525 "name": "BaseBdev2", 00:30:06.525 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:06.525 "is_configured": true, 00:30:06.525 "data_offset": 2048, 00:30:06.525 "data_size": 63488 00:30:06.525 }, 00:30:06.525 { 00:30:06.525 "name": "BaseBdev3", 00:30:06.525 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:06.525 "is_configured": true, 00:30:06.525 "data_offset": 2048, 00:30:06.525 "data_size": 63488 00:30:06.525 } 00:30:06.525 ] 00:30:06.525 }' 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:06.525 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:07.092 17:26:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d5e5c0d6-d487-4ce6-859d-a2564e220304 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.092 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.092 [2024-11-26 17:26:44.412347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:07.092 [2024-11-26 17:26:44.412778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:07.092 [2024-11-26 17:26:44.412807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:07.092 [2024-11-26 17:26:44.413087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:07.093 [2024-11-26 17:26:44.413224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:07.093 [2024-11-26 17:26:44.413234] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:07.093 NewBaseBdev 00:30:07.093 [2024-11-26 17:26:44.413365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:07.093 
17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.093 [ 00:30:07.093 { 00:30:07.093 "name": "NewBaseBdev", 00:30:07.093 "aliases": [ 00:30:07.093 "d5e5c0d6-d487-4ce6-859d-a2564e220304" 00:30:07.093 ], 00:30:07.093 "product_name": "Malloc disk", 00:30:07.093 "block_size": 512, 00:30:07.093 "num_blocks": 65536, 00:30:07.093 "uuid": "d5e5c0d6-d487-4ce6-859d-a2564e220304", 00:30:07.093 "assigned_rate_limits": { 00:30:07.093 "rw_ios_per_sec": 0, 00:30:07.093 "rw_mbytes_per_sec": 0, 00:30:07.093 "r_mbytes_per_sec": 0, 00:30:07.093 "w_mbytes_per_sec": 0 00:30:07.093 }, 00:30:07.093 "claimed": true, 00:30:07.093 "claim_type": "exclusive_write", 00:30:07.093 "zoned": false, 00:30:07.093 "supported_io_types": { 00:30:07.093 "read": true, 00:30:07.093 "write": true, 00:30:07.093 
"unmap": true, 00:30:07.093 "flush": true, 00:30:07.093 "reset": true, 00:30:07.093 "nvme_admin": false, 00:30:07.093 "nvme_io": false, 00:30:07.093 "nvme_io_md": false, 00:30:07.093 "write_zeroes": true, 00:30:07.093 "zcopy": true, 00:30:07.093 "get_zone_info": false, 00:30:07.093 "zone_management": false, 00:30:07.093 "zone_append": false, 00:30:07.093 "compare": false, 00:30:07.093 "compare_and_write": false, 00:30:07.093 "abort": true, 00:30:07.093 "seek_hole": false, 00:30:07.093 "seek_data": false, 00:30:07.093 "copy": true, 00:30:07.093 "nvme_iov_md": false 00:30:07.093 }, 00:30:07.093 "memory_domains": [ 00:30:07.093 { 00:30:07.093 "dma_device_id": "system", 00:30:07.093 "dma_device_type": 1 00:30:07.093 }, 00:30:07.093 { 00:30:07.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.093 "dma_device_type": 2 00:30:07.093 } 00:30:07.093 ], 00:30:07.093 "driver_specific": {} 00:30:07.093 } 00:30:07.093 ] 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:07.093 "name": "Existed_Raid", 00:30:07.093 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:07.093 "strip_size_kb": 64, 00:30:07.093 "state": "online", 00:30:07.093 "raid_level": "raid0", 00:30:07.093 "superblock": true, 00:30:07.093 "num_base_bdevs": 3, 00:30:07.093 "num_base_bdevs_discovered": 3, 00:30:07.093 "num_base_bdevs_operational": 3, 00:30:07.093 "base_bdevs_list": [ 00:30:07.093 { 00:30:07.093 "name": "NewBaseBdev", 00:30:07.093 "uuid": "d5e5c0d6-d487-4ce6-859d-a2564e220304", 00:30:07.093 "is_configured": true, 00:30:07.093 "data_offset": 2048, 00:30:07.093 "data_size": 63488 00:30:07.093 }, 00:30:07.093 { 00:30:07.093 "name": "BaseBdev2", 00:30:07.093 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:07.093 "is_configured": true, 00:30:07.093 "data_offset": 2048, 00:30:07.093 "data_size": 63488 00:30:07.093 }, 00:30:07.093 { 00:30:07.093 "name": "BaseBdev3", 00:30:07.093 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:07.093 
"is_configured": true, 00:30:07.093 "data_offset": 2048, 00:30:07.093 "data_size": 63488 00:30:07.093 } 00:30:07.093 ] 00:30:07.093 }' 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:07.093 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.660 [2024-11-26 17:26:44.909171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:07.660 "name": "Existed_Raid", 00:30:07.660 "aliases": [ 00:30:07.660 "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6" 00:30:07.660 ], 00:30:07.660 "product_name": "Raid 
Volume", 00:30:07.660 "block_size": 512, 00:30:07.660 "num_blocks": 190464, 00:30:07.660 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:07.660 "assigned_rate_limits": { 00:30:07.660 "rw_ios_per_sec": 0, 00:30:07.660 "rw_mbytes_per_sec": 0, 00:30:07.660 "r_mbytes_per_sec": 0, 00:30:07.660 "w_mbytes_per_sec": 0 00:30:07.660 }, 00:30:07.660 "claimed": false, 00:30:07.660 "zoned": false, 00:30:07.660 "supported_io_types": { 00:30:07.660 "read": true, 00:30:07.660 "write": true, 00:30:07.660 "unmap": true, 00:30:07.660 "flush": true, 00:30:07.660 "reset": true, 00:30:07.660 "nvme_admin": false, 00:30:07.660 "nvme_io": false, 00:30:07.660 "nvme_io_md": false, 00:30:07.660 "write_zeroes": true, 00:30:07.660 "zcopy": false, 00:30:07.660 "get_zone_info": false, 00:30:07.660 "zone_management": false, 00:30:07.660 "zone_append": false, 00:30:07.660 "compare": false, 00:30:07.660 "compare_and_write": false, 00:30:07.660 "abort": false, 00:30:07.660 "seek_hole": false, 00:30:07.660 "seek_data": false, 00:30:07.660 "copy": false, 00:30:07.660 "nvme_iov_md": false 00:30:07.660 }, 00:30:07.660 "memory_domains": [ 00:30:07.660 { 00:30:07.660 "dma_device_id": "system", 00:30:07.660 "dma_device_type": 1 00:30:07.660 }, 00:30:07.660 { 00:30:07.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.660 "dma_device_type": 2 00:30:07.660 }, 00:30:07.660 { 00:30:07.660 "dma_device_id": "system", 00:30:07.660 "dma_device_type": 1 00:30:07.660 }, 00:30:07.660 { 00:30:07.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.660 "dma_device_type": 2 00:30:07.660 }, 00:30:07.660 { 00:30:07.660 "dma_device_id": "system", 00:30:07.660 "dma_device_type": 1 00:30:07.660 }, 00:30:07.660 { 00:30:07.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.660 "dma_device_type": 2 00:30:07.660 } 00:30:07.660 ], 00:30:07.660 "driver_specific": { 00:30:07.660 "raid": { 00:30:07.660 "uuid": "e1cc9ef2-2d10-4bd9-9773-4a47fc4bd3d6", 00:30:07.660 "strip_size_kb": 64, 00:30:07.660 "state": "online", 
00:30:07.660 "raid_level": "raid0", 00:30:07.660 "superblock": true, 00:30:07.660 "num_base_bdevs": 3, 00:30:07.660 "num_base_bdevs_discovered": 3, 00:30:07.660 "num_base_bdevs_operational": 3, 00:30:07.660 "base_bdevs_list": [ 00:30:07.660 { 00:30:07.660 "name": "NewBaseBdev", 00:30:07.660 "uuid": "d5e5c0d6-d487-4ce6-859d-a2564e220304", 00:30:07.660 "is_configured": true, 00:30:07.660 "data_offset": 2048, 00:30:07.660 "data_size": 63488 00:30:07.660 }, 00:30:07.660 { 00:30:07.660 "name": "BaseBdev2", 00:30:07.660 "uuid": "9bb746eb-ba6c-4a8a-8e3f-71d845bafc58", 00:30:07.660 "is_configured": true, 00:30:07.660 "data_offset": 2048, 00:30:07.660 "data_size": 63488 00:30:07.660 }, 00:30:07.660 { 00:30:07.660 "name": "BaseBdev3", 00:30:07.660 "uuid": "4efb729a-67bd-418b-9de5-b6f6d519b4d8", 00:30:07.660 "is_configured": true, 00:30:07.660 "data_offset": 2048, 00:30:07.660 "data_size": 63488 00:30:07.660 } 00:30:07.660 ] 00:30:07.660 } 00:30:07.660 } 00:30:07.660 }' 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:07.660 BaseBdev2 00:30:07.660 BaseBdev3' 00:30:07.660 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:07.660 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.661 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.661 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:07.661 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.919 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:07.919 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:07.919 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:07.919 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.920 [2024-11-26 17:26:45.180925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:07.920 [2024-11-26 17:26:45.180955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:07.920 [2024-11-26 17:26:45.181042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:07.920 [2024-11-26 17:26:45.181110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:07.920 [2024-11-26 17:26:45.181126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64840 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64840 ']' 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64840 00:30:07.920 17:26:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64840 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64840' 00:30:07.920 killing process with pid 64840 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64840 00:30:07.920 [2024-11-26 17:26:45.228833] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:07.920 17:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64840 00:30:08.178 [2024-11-26 17:26:45.544835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:09.618 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:30:09.618 00:30:09.618 real 0m11.179s 00:30:09.618 user 0m17.955s 00:30:09.618 sys 0m1.984s 00:30:09.618 17:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.618 ************************************ 00:30:09.618 END TEST raid_state_function_test_sb 00:30:09.618 17:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.618 ************************************ 00:30:09.618 17:26:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:30:09.618 17:26:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:09.618 17:26:46 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.618 17:26:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:09.618 ************************************ 00:30:09.618 START TEST raid_superblock_test 00:30:09.618 ************************************ 00:30:09.618 17:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:30:09.618 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:30:09.618 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:30:09.618 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:30:09.618 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:30:09.618 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:30:09.618 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:30:09.618 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:30:09.618 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:30:09.619 17:26:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65465 00:30:09.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65465 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65465 ']' 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.619 17:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:09.619 [2024-11-26 17:26:46.935687] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:30:09.619 [2024-11-26 17:26:46.935865] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65465 ] 00:30:09.876 [2024-11-26 17:26:47.140809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.876 [2024-11-26 17:26:47.300288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.135 [2024-11-26 17:26:47.521142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:10.135 [2024-11-26 17:26:47.521205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:30:10.702 
17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.702 malloc1 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.702 [2024-11-26 17:26:47.950881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:10.702 [2024-11-26 17:26:47.951110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:10.702 [2024-11-26 17:26:47.951189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:10.702 [2024-11-26 17:26:47.951284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:10.702 [2024-11-26 17:26:47.953798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:10.702 [2024-11-26 17:26:47.953946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:10.702 pt1 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.702 malloc2 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.702 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.702 [2024-11-26 17:26:48.007137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:10.702 [2024-11-26 17:26:48.007196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:10.702 [2024-11-26 17:26:48.007227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:10.702 [2024-11-26 17:26:48.007240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:10.702 [2024-11-26 17:26:48.009618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:10.702 [2024-11-26 17:26:48.009656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:10.702 
pt2 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.702 malloc3 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.702 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.702 [2024-11-26 17:26:48.081951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:10.702 [2024-11-26 17:26:48.082016] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:10.702 [2024-11-26 17:26:48.082042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:10.702 [2024-11-26 17:26:48.082070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:10.702 [2024-11-26 17:26:48.084521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:10.702 [2024-11-26 17:26:48.084560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:10.702 pt3 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.703 [2024-11-26 17:26:48.093982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:10.703 [2024-11-26 17:26:48.096201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:10.703 [2024-11-26 17:26:48.096431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:10.703 [2024-11-26 17:26:48.096628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:10.703 [2024-11-26 17:26:48.096648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:10.703 [2024-11-26 17:26:48.096961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:30:10.703 [2024-11-26 17:26:48.097161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:10.703 [2024-11-26 17:26:48.097174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:10.703 [2024-11-26 17:26:48.097366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:10.703 "name": "raid_bdev1", 00:30:10.703 "uuid": "7129ca77-45ee-4482-9502-d658c5303201", 00:30:10.703 "strip_size_kb": 64, 00:30:10.703 "state": "online", 00:30:10.703 "raid_level": "raid0", 00:30:10.703 "superblock": true, 00:30:10.703 "num_base_bdevs": 3, 00:30:10.703 "num_base_bdevs_discovered": 3, 00:30:10.703 "num_base_bdevs_operational": 3, 00:30:10.703 "base_bdevs_list": [ 00:30:10.703 { 00:30:10.703 "name": "pt1", 00:30:10.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:10.703 "is_configured": true, 00:30:10.703 "data_offset": 2048, 00:30:10.703 "data_size": 63488 00:30:10.703 }, 00:30:10.703 { 00:30:10.703 "name": "pt2", 00:30:10.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:10.703 "is_configured": true, 00:30:10.703 "data_offset": 2048, 00:30:10.703 "data_size": 63488 00:30:10.703 }, 00:30:10.703 { 00:30:10.703 "name": "pt3", 00:30:10.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:10.703 "is_configured": true, 00:30:10.703 "data_offset": 2048, 00:30:10.703 "data_size": 63488 00:30:10.703 } 00:30:10.703 ] 00:30:10.703 }' 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:10.703 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.271 [2024-11-26 17:26:48.550413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:11.271 "name": "raid_bdev1", 00:30:11.271 "aliases": [ 00:30:11.271 "7129ca77-45ee-4482-9502-d658c5303201" 00:30:11.271 ], 00:30:11.271 "product_name": "Raid Volume", 00:30:11.271 "block_size": 512, 00:30:11.271 "num_blocks": 190464, 00:30:11.271 "uuid": "7129ca77-45ee-4482-9502-d658c5303201", 00:30:11.271 "assigned_rate_limits": { 00:30:11.271 "rw_ios_per_sec": 0, 00:30:11.271 "rw_mbytes_per_sec": 0, 00:30:11.271 "r_mbytes_per_sec": 0, 00:30:11.271 "w_mbytes_per_sec": 0 00:30:11.271 }, 00:30:11.271 "claimed": false, 00:30:11.271 "zoned": false, 00:30:11.271 "supported_io_types": { 00:30:11.271 "read": true, 00:30:11.271 "write": true, 00:30:11.271 "unmap": true, 00:30:11.271 "flush": true, 00:30:11.271 "reset": true, 00:30:11.271 "nvme_admin": false, 00:30:11.271 "nvme_io": false, 00:30:11.271 "nvme_io_md": false, 00:30:11.271 "write_zeroes": true, 00:30:11.271 "zcopy": false, 00:30:11.271 "get_zone_info": false, 00:30:11.271 "zone_management": false, 00:30:11.271 "zone_append": false, 00:30:11.271 "compare": 
false, 00:30:11.271 "compare_and_write": false, 00:30:11.271 "abort": false, 00:30:11.271 "seek_hole": false, 00:30:11.271 "seek_data": false, 00:30:11.271 "copy": false, 00:30:11.271 "nvme_iov_md": false 00:30:11.271 }, 00:30:11.271 "memory_domains": [ 00:30:11.271 { 00:30:11.271 "dma_device_id": "system", 00:30:11.271 "dma_device_type": 1 00:30:11.271 }, 00:30:11.271 { 00:30:11.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.271 "dma_device_type": 2 00:30:11.271 }, 00:30:11.271 { 00:30:11.271 "dma_device_id": "system", 00:30:11.271 "dma_device_type": 1 00:30:11.271 }, 00:30:11.271 { 00:30:11.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.271 "dma_device_type": 2 00:30:11.271 }, 00:30:11.271 { 00:30:11.271 "dma_device_id": "system", 00:30:11.271 "dma_device_type": 1 00:30:11.271 }, 00:30:11.271 { 00:30:11.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.271 "dma_device_type": 2 00:30:11.271 } 00:30:11.271 ], 00:30:11.271 "driver_specific": { 00:30:11.271 "raid": { 00:30:11.271 "uuid": "7129ca77-45ee-4482-9502-d658c5303201", 00:30:11.271 "strip_size_kb": 64, 00:30:11.271 "state": "online", 00:30:11.271 "raid_level": "raid0", 00:30:11.271 "superblock": true, 00:30:11.271 "num_base_bdevs": 3, 00:30:11.271 "num_base_bdevs_discovered": 3, 00:30:11.271 "num_base_bdevs_operational": 3, 00:30:11.271 "base_bdevs_list": [ 00:30:11.271 { 00:30:11.271 "name": "pt1", 00:30:11.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:11.271 "is_configured": true, 00:30:11.271 "data_offset": 2048, 00:30:11.271 "data_size": 63488 00:30:11.271 }, 00:30:11.271 { 00:30:11.271 "name": "pt2", 00:30:11.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:11.271 "is_configured": true, 00:30:11.271 "data_offset": 2048, 00:30:11.271 "data_size": 63488 00:30:11.271 }, 00:30:11.271 { 00:30:11.271 "name": "pt3", 00:30:11.271 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:11.271 "is_configured": true, 00:30:11.271 "data_offset": 2048, 00:30:11.271 "data_size": 
63488 00:30:11.271 } 00:30:11.271 ] 00:30:11.271 } 00:30:11.271 } 00:30:11.271 }' 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:11.271 pt2 00:30:11.271 pt3' 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.271 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.531 
17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:30:11.531 [2024-11-26 17:26:48.822354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7129ca77-45ee-4482-9502-d658c5303201 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7129ca77-45ee-4482-9502-d658c5303201 ']' 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.531 [2024-11-26 17:26:48.866092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:11.531 [2024-11-26 17:26:48.866122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:11.531 [2024-11-26 17:26:48.866194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:11.531 [2024-11-26 17:26:48.866257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:11.531 [2024-11-26 17:26:48.866269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:11.531 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.790 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.790 [2024-11-26 17:26:49.002223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:11.790 [2024-11-26 17:26:49.004335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:11.790 [2024-11-26 17:26:49.004395] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:30:11.790 [2024-11-26 17:26:49.004451] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:11.790 [2024-11-26 17:26:49.004513] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:11.790 [2024-11-26 17:26:49.004535] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:30:11.790 [2024-11-26 17:26:49.004557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:11.790 [2024-11-26 17:26:49.004570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:30:11.790 request: 00:30:11.790 { 00:30:11.790 "name": "raid_bdev1", 00:30:11.790 "raid_level": "raid0", 00:30:11.790 "base_bdevs": [ 00:30:11.790 "malloc1", 00:30:11.790 "malloc2", 00:30:11.790 "malloc3" 00:30:11.790 ], 00:30:11.790 "strip_size_kb": 64, 00:30:11.790 "superblock": false, 00:30:11.790 "method": "bdev_raid_create", 00:30:11.790 "req_id": 1 00:30:11.790 } 00:30:11.790 Got JSON-RPC error response 00:30:11.790 response: 00:30:11.790 { 00:30:11.790 "code": -17, 00:30:11.790 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:11.790 } 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.790 [2024-11-26 17:26:49.062134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:11.790 [2024-11-26 17:26:49.062185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:11.790 [2024-11-26 17:26:49.062206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:11.790 [2024-11-26 17:26:49.062218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:11.790 [2024-11-26 17:26:49.064615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:11.790 [2024-11-26 17:26:49.064654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:11.790 [2024-11-26 17:26:49.064733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:11.790 [2024-11-26 17:26:49.064789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:30:11.790 pt1 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:11.790 "name": "raid_bdev1", 00:30:11.790 "uuid": "7129ca77-45ee-4482-9502-d658c5303201", 00:30:11.790 
"strip_size_kb": 64, 00:30:11.790 "state": "configuring", 00:30:11.790 "raid_level": "raid0", 00:30:11.790 "superblock": true, 00:30:11.790 "num_base_bdevs": 3, 00:30:11.790 "num_base_bdevs_discovered": 1, 00:30:11.790 "num_base_bdevs_operational": 3, 00:30:11.790 "base_bdevs_list": [ 00:30:11.790 { 00:30:11.790 "name": "pt1", 00:30:11.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:11.790 "is_configured": true, 00:30:11.790 "data_offset": 2048, 00:30:11.790 "data_size": 63488 00:30:11.790 }, 00:30:11.790 { 00:30:11.790 "name": null, 00:30:11.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:11.790 "is_configured": false, 00:30:11.790 "data_offset": 2048, 00:30:11.790 "data_size": 63488 00:30:11.790 }, 00:30:11.790 { 00:30:11.790 "name": null, 00:30:11.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:11.790 "is_configured": false, 00:30:11.790 "data_offset": 2048, 00:30:11.790 "data_size": 63488 00:30:11.790 } 00:30:11.790 ] 00:30:11.790 }' 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:11.790 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.357 [2024-11-26 17:26:49.542302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:12.357 [2024-11-26 17:26:49.542395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.357 [2024-11-26 17:26:49.542428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:30:12.357 [2024-11-26 17:26:49.542441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.357 [2024-11-26 17:26:49.542923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.357 [2024-11-26 17:26:49.542943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:12.357 [2024-11-26 17:26:49.543043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:12.357 [2024-11-26 17:26:49.543091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:12.357 pt2 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.357 [2024-11-26 17:26:49.550303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:12.357 17:26:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.357 "name": "raid_bdev1", 00:30:12.357 "uuid": "7129ca77-45ee-4482-9502-d658c5303201", 00:30:12.357 "strip_size_kb": 64, 00:30:12.357 "state": "configuring", 00:30:12.357 "raid_level": "raid0", 00:30:12.357 "superblock": true, 00:30:12.357 "num_base_bdevs": 3, 00:30:12.357 "num_base_bdevs_discovered": 1, 00:30:12.357 "num_base_bdevs_operational": 3, 00:30:12.357 "base_bdevs_list": [ 00:30:12.357 { 00:30:12.357 "name": "pt1", 00:30:12.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:12.357 "is_configured": true, 00:30:12.357 "data_offset": 2048, 00:30:12.357 "data_size": 63488 00:30:12.357 }, 00:30:12.357 { 00:30:12.357 "name": null, 00:30:12.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:12.357 "is_configured": false, 00:30:12.357 "data_offset": 0, 00:30:12.357 "data_size": 63488 00:30:12.357 }, 00:30:12.357 { 00:30:12.357 "name": null, 00:30:12.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:12.357 
"is_configured": false, 00:30:12.357 "data_offset": 2048, 00:30:12.357 "data_size": 63488 00:30:12.357 } 00:30:12.357 ] 00:30:12.357 }' 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.357 17:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.616 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:30:12.616 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:12.616 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:12.616 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.616 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.616 [2024-11-26 17:26:50.006406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:12.616 [2024-11-26 17:26:50.006485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.616 [2024-11-26 17:26:50.006508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:30:12.616 [2024-11-26 17:26:50.006525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.616 [2024-11-26 17:26:50.007035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.616 [2024-11-26 17:26:50.007095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:12.616 [2024-11-26 17:26:50.007187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:12.616 [2024-11-26 17:26:50.007216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:12.616 pt2 00:30:12.616 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:12.616 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:12.616 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:12.616 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:12.616 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.616 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.616 [2024-11-26 17:26:50.014381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:12.616 [2024-11-26 17:26:50.014435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.616 [2024-11-26 17:26:50.014453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:12.616 [2024-11-26 17:26:50.014467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.616 [2024-11-26 17:26:50.014870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.616 [2024-11-26 17:26:50.014904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:12.616 [2024-11-26 17:26:50.014975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:30:12.616 [2024-11-26 17:26:50.015000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:12.616 [2024-11-26 17:26:50.015164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:12.616 [2024-11-26 17:26:50.015179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:12.616 [2024-11-26 17:26:50.015488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:12.617 [2024-11-26 17:26:50.015657] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:12.617 [2024-11-26 17:26:50.015668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:30:12.617 [2024-11-26 17:26:50.015820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.617 pt3 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.617 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.876 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.876 "name": "raid_bdev1", 00:30:12.876 "uuid": "7129ca77-45ee-4482-9502-d658c5303201", 00:30:12.876 "strip_size_kb": 64, 00:30:12.876 "state": "online", 00:30:12.876 "raid_level": "raid0", 00:30:12.876 "superblock": true, 00:30:12.876 "num_base_bdevs": 3, 00:30:12.876 "num_base_bdevs_discovered": 3, 00:30:12.876 "num_base_bdevs_operational": 3, 00:30:12.876 "base_bdevs_list": [ 00:30:12.876 { 00:30:12.876 "name": "pt1", 00:30:12.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:12.876 "is_configured": true, 00:30:12.876 "data_offset": 2048, 00:30:12.876 "data_size": 63488 00:30:12.876 }, 00:30:12.876 { 00:30:12.876 "name": "pt2", 00:30:12.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:12.876 "is_configured": true, 00:30:12.876 "data_offset": 2048, 00:30:12.876 "data_size": 63488 00:30:12.876 }, 00:30:12.876 { 00:30:12.876 "name": "pt3", 00:30:12.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:12.876 "is_configured": true, 00:30:12.876 "data_offset": 2048, 00:30:12.876 "data_size": 63488 00:30:12.876 } 00:30:12.876 ] 00:30:12.876 }' 00:30:12.876 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.876 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:13.136 17:26:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:13.136 [2024-11-26 17:26:50.466857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.136 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:13.136 "name": "raid_bdev1", 00:30:13.136 "aliases": [ 00:30:13.136 "7129ca77-45ee-4482-9502-d658c5303201" 00:30:13.136 ], 00:30:13.136 "product_name": "Raid Volume", 00:30:13.136 "block_size": 512, 00:30:13.136 "num_blocks": 190464, 00:30:13.136 "uuid": "7129ca77-45ee-4482-9502-d658c5303201", 00:30:13.136 "assigned_rate_limits": { 00:30:13.136 "rw_ios_per_sec": 0, 00:30:13.136 "rw_mbytes_per_sec": 0, 00:30:13.136 "r_mbytes_per_sec": 0, 00:30:13.136 "w_mbytes_per_sec": 0 00:30:13.136 }, 00:30:13.136 "claimed": false, 00:30:13.136 "zoned": false, 00:30:13.136 "supported_io_types": { 00:30:13.136 "read": true, 00:30:13.136 "write": true, 00:30:13.136 "unmap": true, 00:30:13.136 "flush": true, 00:30:13.136 "reset": true, 00:30:13.136 "nvme_admin": false, 00:30:13.136 "nvme_io": false, 00:30:13.136 "nvme_io_md": false, 00:30:13.136 
"write_zeroes": true, 00:30:13.136 "zcopy": false, 00:30:13.137 "get_zone_info": false, 00:30:13.137 "zone_management": false, 00:30:13.137 "zone_append": false, 00:30:13.137 "compare": false, 00:30:13.137 "compare_and_write": false, 00:30:13.137 "abort": false, 00:30:13.137 "seek_hole": false, 00:30:13.137 "seek_data": false, 00:30:13.137 "copy": false, 00:30:13.137 "nvme_iov_md": false 00:30:13.137 }, 00:30:13.137 "memory_domains": [ 00:30:13.137 { 00:30:13.137 "dma_device_id": "system", 00:30:13.137 "dma_device_type": 1 00:30:13.137 }, 00:30:13.137 { 00:30:13.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:13.137 "dma_device_type": 2 00:30:13.137 }, 00:30:13.137 { 00:30:13.137 "dma_device_id": "system", 00:30:13.137 "dma_device_type": 1 00:30:13.137 }, 00:30:13.137 { 00:30:13.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:13.137 "dma_device_type": 2 00:30:13.137 }, 00:30:13.137 { 00:30:13.137 "dma_device_id": "system", 00:30:13.137 "dma_device_type": 1 00:30:13.137 }, 00:30:13.137 { 00:30:13.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:13.137 "dma_device_type": 2 00:30:13.137 } 00:30:13.137 ], 00:30:13.137 "driver_specific": { 00:30:13.137 "raid": { 00:30:13.137 "uuid": "7129ca77-45ee-4482-9502-d658c5303201", 00:30:13.137 "strip_size_kb": 64, 00:30:13.137 "state": "online", 00:30:13.137 "raid_level": "raid0", 00:30:13.137 "superblock": true, 00:30:13.137 "num_base_bdevs": 3, 00:30:13.137 "num_base_bdevs_discovered": 3, 00:30:13.137 "num_base_bdevs_operational": 3, 00:30:13.137 "base_bdevs_list": [ 00:30:13.137 { 00:30:13.137 "name": "pt1", 00:30:13.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:13.137 "is_configured": true, 00:30:13.137 "data_offset": 2048, 00:30:13.137 "data_size": 63488 00:30:13.137 }, 00:30:13.137 { 00:30:13.137 "name": "pt2", 00:30:13.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:13.137 "is_configured": true, 00:30:13.137 "data_offset": 2048, 00:30:13.137 "data_size": 63488 00:30:13.137 }, 00:30:13.137 
{ 00:30:13.137 "name": "pt3", 00:30:13.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:13.137 "is_configured": true, 00:30:13.137 "data_offset": 2048, 00:30:13.137 "data_size": 63488 00:30:13.137 } 00:30:13.137 ] 00:30:13.137 } 00:30:13.137 } 00:30:13.137 }' 00:30:13.137 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:13.137 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:13.137 pt2 00:30:13.137 pt3' 00:30:13.137 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:13.396 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:13.396 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:13.396 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:13.396 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.396 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:13.396 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.396 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:30:13.397 [2024-11-26 
17:26:50.730819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7129ca77-45ee-4482-9502-d658c5303201 '!=' 7129ca77-45ee-4482-9502-d658c5303201 ']' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65465 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65465 ']' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65465 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65465 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:13.397 killing process with pid 65465 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65465' 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65465 00:30:13.397 [2024-11-26 17:26:50.811684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:13.397 [2024-11-26 17:26:50.811787] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:13.397 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65465 00:30:13.397 [2024-11-26 17:26:50.811850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:13.397 [2024-11-26 17:26:50.811865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:30:13.964 [2024-11-26 17:26:51.126921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:14.900 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:30:14.900 00:30:14.900 real 0m5.497s 00:30:14.900 user 0m7.945s 00:30:14.900 sys 0m0.992s 00:30:14.900 17:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.900 17:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.900 ************************************ 00:30:14.900 END TEST raid_superblock_test 00:30:14.900 ************************************ 00:30:15.159 17:26:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:30:15.159 17:26:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:15.159 17:26:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.159 17:26:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:15.159 ************************************ 00:30:15.159 START TEST raid_read_error_test 00:30:15.159 ************************************ 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:30:15.159 17:26:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0DDQqe14X0 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65724 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65724 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65724 ']' 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.159 17:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.159 [2024-11-26 17:26:52.507819] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:30:15.159 [2024-11-26 17:26:52.507999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65724 ] 00:30:15.418 [2024-11-26 17:26:52.698262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.418 [2024-11-26 17:26:52.814326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.688 [2024-11-26 17:26:53.019843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:15.688 [2024-11-26 17:26:53.019902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:15.946 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.946 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:30:15.946 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:15.946 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:15.946 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.946 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.205 BaseBdev1_malloc 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.205 true 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.205 [2024-11-26 17:26:53.434743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:16.205 [2024-11-26 17:26:53.434806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.205 [2024-11-26 17:26:53.434829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:16.205 [2024-11-26 17:26:53.434845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.205 [2024-11-26 17:26:53.437253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.205 [2024-11-26 17:26:53.437294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:16.205 BaseBdev1 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.205 BaseBdev2_malloc 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.205 true 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.205 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.206 [2024-11-26 17:26:53.495613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:16.206 [2024-11-26 17:26:53.495672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.206 [2024-11-26 17:26:53.495690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:16.206 [2024-11-26 17:26:53.495705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.206 [2024-11-26 17:26:53.498075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.206 [2024-11-26 17:26:53.498115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:16.206 BaseBdev2 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.206 BaseBdev3_malloc 00:30:16.206 17:26:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.206 true 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.206 [2024-11-26 17:26:53.567259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:30:16.206 [2024-11-26 17:26:53.567312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.206 [2024-11-26 17:26:53.567332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:16.206 [2024-11-26 17:26:53.567346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.206 [2024-11-26 17:26:53.569685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.206 [2024-11-26 17:26:53.569727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:16.206 BaseBdev3 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.206 [2024-11-26 17:26:53.575344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:16.206 [2024-11-26 17:26:53.577422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:16.206 [2024-11-26 17:26:53.577495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:16.206 [2024-11-26 17:26:53.577688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:16.206 [2024-11-26 17:26:53.577703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:16.206 [2024-11-26 17:26:53.577963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:30:16.206 [2024-11-26 17:26:53.578189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:16.206 [2024-11-26 17:26:53.578216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:30:16.206 [2024-11-26 17:26:53.578378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:16.206 17:26:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:16.206 "name": "raid_bdev1", 00:30:16.206 "uuid": "ee7a5b00-b800-4c0c-bdc2-f90c61f32ec5", 00:30:16.206 "strip_size_kb": 64, 00:30:16.206 "state": "online", 00:30:16.206 "raid_level": "raid0", 00:30:16.206 "superblock": true, 00:30:16.206 "num_base_bdevs": 3, 00:30:16.206 "num_base_bdevs_discovered": 3, 00:30:16.206 "num_base_bdevs_operational": 3, 00:30:16.206 "base_bdevs_list": [ 00:30:16.206 { 00:30:16.206 "name": "BaseBdev1", 00:30:16.206 "uuid": "8b01d7c2-e5bb-58eb-9cfd-4ea4d1523cba", 00:30:16.206 "is_configured": true, 00:30:16.206 "data_offset": 2048, 00:30:16.206 "data_size": 63488 00:30:16.206 }, 00:30:16.206 { 00:30:16.206 "name": "BaseBdev2", 00:30:16.206 "uuid": "8414605a-6aa4-51ad-aaf2-439cfc6a6b8d", 00:30:16.206 "is_configured": true, 00:30:16.206 "data_offset": 2048, 00:30:16.206 "data_size": 63488 
00:30:16.206 }, 00:30:16.206 { 00:30:16.206 "name": "BaseBdev3", 00:30:16.206 "uuid": "b50af08c-5dc6-5ba7-bea2-ce8d3046a92e", 00:30:16.206 "is_configured": true, 00:30:16.206 "data_offset": 2048, 00:30:16.206 "data_size": 63488 00:30:16.206 } 00:30:16.206 ] 00:30:16.206 }' 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:16.206 17:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.775 17:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:16.775 17:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:16.775 [2024-11-26 17:26:54.144800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:17.712 "name": "raid_bdev1", 00:30:17.712 "uuid": "ee7a5b00-b800-4c0c-bdc2-f90c61f32ec5", 00:30:17.712 "strip_size_kb": 64, 00:30:17.712 "state": "online", 00:30:17.712 "raid_level": "raid0", 00:30:17.712 "superblock": true, 00:30:17.712 "num_base_bdevs": 3, 00:30:17.712 "num_base_bdevs_discovered": 3, 00:30:17.712 "num_base_bdevs_operational": 3, 00:30:17.712 "base_bdevs_list": [ 00:30:17.712 { 00:30:17.712 "name": "BaseBdev1", 00:30:17.712 "uuid": "8b01d7c2-e5bb-58eb-9cfd-4ea4d1523cba", 00:30:17.712 "is_configured": true, 00:30:17.712 "data_offset": 2048, 00:30:17.712 "data_size": 63488 
00:30:17.712 }, 00:30:17.712 { 00:30:17.712 "name": "BaseBdev2", 00:30:17.712 "uuid": "8414605a-6aa4-51ad-aaf2-439cfc6a6b8d", 00:30:17.712 "is_configured": true, 00:30:17.712 "data_offset": 2048, 00:30:17.712 "data_size": 63488 00:30:17.712 }, 00:30:17.712 { 00:30:17.712 "name": "BaseBdev3", 00:30:17.712 "uuid": "b50af08c-5dc6-5ba7-bea2-ce8d3046a92e", 00:30:17.712 "is_configured": true, 00:30:17.712 "data_offset": 2048, 00:30:17.712 "data_size": 63488 00:30:17.712 } 00:30:17.712 ] 00:30:17.712 }' 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:17.712 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.281 [2024-11-26 17:26:55.481891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:18.281 [2024-11-26 17:26:55.481926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:18.281 [2024-11-26 17:26:55.484739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:18.281 [2024-11-26 17:26:55.484787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:18.281 [2024-11-26 17:26:55.484826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:18.281 [2024-11-26 17:26:55.484837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:30:18.281 { 00:30:18.281 "results": [ 00:30:18.281 { 00:30:18.281 "job": "raid_bdev1", 00:30:18.281 "core_mask": "0x1", 00:30:18.281 "workload": "randrw", 00:30:18.281 "percentage": 50, 
00:30:18.281 "status": "finished", 00:30:18.281 "queue_depth": 1, 00:30:18.281 "io_size": 131072, 00:30:18.281 "runtime": 1.335116, 00:30:18.281 "iops": 15468.318857687273, 00:30:18.281 "mibps": 1933.539857210909, 00:30:18.281 "io_failed": 1, 00:30:18.281 "io_timeout": 0, 00:30:18.281 "avg_latency_us": 89.14105244712518, 00:30:18.281 "min_latency_us": 26.940952380952382, 00:30:18.281 "max_latency_us": 1443.352380952381 00:30:18.281 } 00:30:18.281 ], 00:30:18.281 "core_count": 1 00:30:18.281 } 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65724 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65724 ']' 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65724 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65724 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65724' 00:30:18.281 killing process with pid 65724 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65724 00:30:18.281 [2024-11-26 17:26:55.528434] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:18.281 17:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65724 00:30:18.540 [2024-11-26 
17:26:55.767828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:19.919 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0DDQqe14X0 00:30:19.919 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:19.919 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:19.919 17:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:30:19.919 17:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:30:19.919 17:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:19.919 17:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:19.919 17:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:30:19.919 00:30:19.919 real 0m4.635s 00:30:19.919 user 0m5.509s 00:30:19.919 sys 0m0.634s 00:30:19.919 17:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.919 17:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.919 ************************************ 00:30:19.919 END TEST raid_read_error_test 00:30:19.919 ************************************ 00:30:19.919 17:26:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:30:19.919 17:26:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:19.919 17:26:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.919 17:26:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:19.919 ************************************ 00:30:19.919 START TEST raid_write_error_test 00:30:19.919 ************************************ 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:30:19.919 17:26:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:19.919 17:26:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fWf7BkOMSN 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65869 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65869 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65869 ']' 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.919 17:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.919 [2024-11-26 17:26:57.207615] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:30:19.919 [2024-11-26 17:26:57.207797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65869 ] 00:30:20.177 [2024-11-26 17:26:57.408866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.177 [2024-11-26 17:26:57.579489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.436 [2024-11-26 17:26:57.799454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:20.436 [2024-11-26 17:26:57.799523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:20.694 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.694 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:30:20.694 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:20.694 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:20.694 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.694 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.953 BaseBdev1_malloc 00:30:20.953 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.954 true 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.954 [2024-11-26 17:26:58.193328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:20.954 [2024-11-26 17:26:58.193391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:20.954 [2024-11-26 17:26:58.193416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:20.954 [2024-11-26 17:26:58.193432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:20.954 [2024-11-26 17:26:58.196018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:20.954 [2024-11-26 17:26:58.196078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:20.954 BaseBdev1 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:20.954 BaseBdev2_malloc 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.954 true 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.954 [2024-11-26 17:26:58.267546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:20.954 [2024-11-26 17:26:58.267612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:20.954 [2024-11-26 17:26:58.267634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:20.954 [2024-11-26 17:26:58.267650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:20.954 [2024-11-26 17:26:58.270223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:20.954 [2024-11-26 17:26:58.270288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:20.954 BaseBdev2 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:20.954 17:26:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.954 BaseBdev3_malloc 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.954 true 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.954 [2024-11-26 17:26:58.345562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:30:20.954 [2024-11-26 17:26:58.345621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:20.954 [2024-11-26 17:26:58.345642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:20.954 [2024-11-26 17:26:58.345656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:20.954 [2024-11-26 17:26:58.348261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:20.954 [2024-11-26 17:26:58.348303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:30:20.954 BaseBdev3 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.954 [2024-11-26 17:26:58.353657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:20.954 [2024-11-26 17:26:58.355905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:20.954 [2024-11-26 17:26:58.355990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:20.954 [2024-11-26 17:26:58.356204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:20.954 [2024-11-26 17:26:58.356221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:20.954 [2024-11-26 17:26:58.356508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:30:20.954 [2024-11-26 17:26:58.356676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:20.954 [2024-11-26 17:26:58.356699] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:30:20.954 [2024-11-26 17:26:58.356848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.954 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.213 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:21.213 "name": "raid_bdev1", 00:30:21.213 "uuid": "1d33b680-0413-487c-aa01-326fdd83f351", 00:30:21.213 "strip_size_kb": 64, 00:30:21.213 "state": "online", 00:30:21.213 "raid_level": "raid0", 00:30:21.213 "superblock": true, 00:30:21.213 "num_base_bdevs": 3, 00:30:21.213 "num_base_bdevs_discovered": 3, 00:30:21.213 "num_base_bdevs_operational": 3, 00:30:21.213 "base_bdevs_list": [ 00:30:21.213 { 00:30:21.213 "name": "BaseBdev1", 
00:30:21.213 "uuid": "f385290b-b38c-5c59-82c2-000818cdb7c9", 00:30:21.213 "is_configured": true, 00:30:21.213 "data_offset": 2048, 00:30:21.213 "data_size": 63488 00:30:21.213 }, 00:30:21.213 { 00:30:21.213 "name": "BaseBdev2", 00:30:21.213 "uuid": "c24c4470-da06-5ee3-a13a-0a41d0ed4c67", 00:30:21.213 "is_configured": true, 00:30:21.213 "data_offset": 2048, 00:30:21.213 "data_size": 63488 00:30:21.213 }, 00:30:21.214 { 00:30:21.214 "name": "BaseBdev3", 00:30:21.214 "uuid": "9fceff51-e9c3-58e6-ae42-7bf012d58088", 00:30:21.214 "is_configured": true, 00:30:21.214 "data_offset": 2048, 00:30:21.214 "data_size": 63488 00:30:21.214 } 00:30:21.214 ] 00:30:21.214 }' 00:30:21.214 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:21.214 17:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.473 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:21.473 17:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:21.731 [2024-11-26 17:26:58.931119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.677 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:22.677 "name": "raid_bdev1", 00:30:22.677 "uuid": "1d33b680-0413-487c-aa01-326fdd83f351", 00:30:22.677 "strip_size_kb": 64, 00:30:22.677 "state": "online", 00:30:22.677 
"raid_level": "raid0", 00:30:22.677 "superblock": true, 00:30:22.677 "num_base_bdevs": 3, 00:30:22.677 "num_base_bdevs_discovered": 3, 00:30:22.677 "num_base_bdevs_operational": 3, 00:30:22.677 "base_bdevs_list": [ 00:30:22.677 { 00:30:22.677 "name": "BaseBdev1", 00:30:22.677 "uuid": "f385290b-b38c-5c59-82c2-000818cdb7c9", 00:30:22.677 "is_configured": true, 00:30:22.677 "data_offset": 2048, 00:30:22.677 "data_size": 63488 00:30:22.678 }, 00:30:22.678 { 00:30:22.678 "name": "BaseBdev2", 00:30:22.678 "uuid": "c24c4470-da06-5ee3-a13a-0a41d0ed4c67", 00:30:22.678 "is_configured": true, 00:30:22.678 "data_offset": 2048, 00:30:22.678 "data_size": 63488 00:30:22.678 }, 00:30:22.678 { 00:30:22.678 "name": "BaseBdev3", 00:30:22.678 "uuid": "9fceff51-e9c3-58e6-ae42-7bf012d58088", 00:30:22.678 "is_configured": true, 00:30:22.678 "data_offset": 2048, 00:30:22.678 "data_size": 63488 00:30:22.678 } 00:30:22.678 ] 00:30:22.678 }' 00:30:22.678 17:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:22.678 17:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.937 [2024-11-26 17:27:00.257822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:22.937 [2024-11-26 17:27:00.257859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:22.937 [2024-11-26 17:27:00.260858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:22.937 [2024-11-26 17:27:00.260905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:22.937 [2024-11-26 17:27:00.260952] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:22.937 [2024-11-26 17:27:00.260965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:30:22.937 { 00:30:22.937 "results": [ 00:30:22.937 { 00:30:22.937 "job": "raid_bdev1", 00:30:22.937 "core_mask": "0x1", 00:30:22.937 "workload": "randrw", 00:30:22.937 "percentage": 50, 00:30:22.937 "status": "finished", 00:30:22.937 "queue_depth": 1, 00:30:22.937 "io_size": 131072, 00:30:22.937 "runtime": 1.324684, 00:30:22.937 "iops": 14395.886113216435, 00:30:22.937 "mibps": 1799.4857641520543, 00:30:22.937 "io_failed": 1, 00:30:22.937 "io_timeout": 0, 00:30:22.937 "avg_latency_us": 95.7638834330859, 00:30:22.937 "min_latency_us": 20.601904761904763, 00:30:22.937 "max_latency_us": 1599.3904761904762 00:30:22.937 } 00:30:22.937 ], 00:30:22.937 "core_count": 1 00:30:22.937 } 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65869 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65869 ']' 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65869 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65869 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65869' 00:30:22.937 killing process with pid 65869 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65869 00:30:22.937 [2024-11-26 17:27:00.304644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:22.937 17:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65869 00:30:23.196 [2024-11-26 17:27:00.544895] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fWf7BkOMSN 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:30:24.572 00:30:24.572 real 0m4.716s 00:30:24.572 user 0m5.631s 00:30:24.572 sys 0m0.641s 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.572 17:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.572 ************************************ 00:30:24.572 END TEST raid_write_error_test 00:30:24.572 ************************************ 00:30:24.572 17:27:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:30:24.572 17:27:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:30:24.572 17:27:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:24.572 17:27:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.572 17:27:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:24.572 ************************************ 00:30:24.572 START TEST raid_state_function_test 00:30:24.572 ************************************ 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:24.572 17:27:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66013 00:30:24.572 Process raid pid: 66013 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66013' 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66013 00:30:24.572 17:27:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 66013 ']' 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.572 17:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.572 [2024-11-26 17:27:01.979337] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:30:24.572 [2024-11-26 17:27:01.979519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.830 [2024-11-26 17:27:02.174332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.089 [2024-11-26 17:27:02.300122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.089 [2024-11-26 17:27:02.518826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:25.089 [2024-11-26 17:27:02.518881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.661 [2024-11-26 17:27:02.880725] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:25.661 [2024-11-26 17:27:02.880783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:25.661 [2024-11-26 17:27:02.880796] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:25.661 [2024-11-26 17:27:02.880809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:25.661 [2024-11-26 17:27:02.880817] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:25.661 [2024-11-26 17:27:02.880829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:25.661 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:25.662 "name": "Existed_Raid", 00:30:25.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.662 "strip_size_kb": 64, 00:30:25.662 "state": "configuring", 00:30:25.662 "raid_level": "concat", 00:30:25.662 "superblock": false, 00:30:25.662 "num_base_bdevs": 3, 00:30:25.662 "num_base_bdevs_discovered": 0, 00:30:25.662 "num_base_bdevs_operational": 3, 00:30:25.662 "base_bdevs_list": [ 00:30:25.662 { 00:30:25.662 "name": "BaseBdev1", 00:30:25.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.662 "is_configured": false, 00:30:25.662 "data_offset": 0, 00:30:25.662 "data_size": 0 00:30:25.662 }, 00:30:25.662 { 00:30:25.662 "name": "BaseBdev2", 00:30:25.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.662 "is_configured": false, 00:30:25.662 "data_offset": 0, 00:30:25.662 "data_size": 0 00:30:25.662 }, 00:30:25.662 { 00:30:25.662 "name": "BaseBdev3", 00:30:25.662 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:25.662 "is_configured": false, 00:30:25.662 "data_offset": 0, 00:30:25.662 "data_size": 0 00:30:25.662 } 00:30:25.662 ] 00:30:25.662 }' 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:25.662 17:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.920 [2024-11-26 17:27:03.336777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:25.920 [2024-11-26 17:27:03.336834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.920 [2024-11-26 17:27:03.348876] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:25.920 [2024-11-26 17:27:03.348952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:25.920 [2024-11-26 17:27:03.348972] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:25.920 [2024-11-26 17:27:03.348995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:30:25.920 [2024-11-26 17:27:03.349010] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:25.920 [2024-11-26 17:27:03.349032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.920 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.178 [2024-11-26 17:27:03.402786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:26.178 BaseBdev1 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.178 [ 00:30:26.178 { 00:30:26.178 "name": "BaseBdev1", 00:30:26.178 "aliases": [ 00:30:26.178 "c0d993cd-a6bd-49af-95c4-817d9e927c12" 00:30:26.178 ], 00:30:26.178 "product_name": "Malloc disk", 00:30:26.178 "block_size": 512, 00:30:26.178 "num_blocks": 65536, 00:30:26.178 "uuid": "c0d993cd-a6bd-49af-95c4-817d9e927c12", 00:30:26.178 "assigned_rate_limits": { 00:30:26.178 "rw_ios_per_sec": 0, 00:30:26.178 "rw_mbytes_per_sec": 0, 00:30:26.178 "r_mbytes_per_sec": 0, 00:30:26.178 "w_mbytes_per_sec": 0 00:30:26.178 }, 00:30:26.178 "claimed": true, 00:30:26.178 "claim_type": "exclusive_write", 00:30:26.178 "zoned": false, 00:30:26.178 "supported_io_types": { 00:30:26.178 "read": true, 00:30:26.178 "write": true, 00:30:26.178 "unmap": true, 00:30:26.178 "flush": true, 00:30:26.178 "reset": true, 00:30:26.178 "nvme_admin": false, 00:30:26.178 "nvme_io": false, 00:30:26.178 "nvme_io_md": false, 00:30:26.178 "write_zeroes": true, 00:30:26.178 "zcopy": true, 00:30:26.178 "get_zone_info": false, 00:30:26.178 "zone_management": false, 00:30:26.178 "zone_append": false, 00:30:26.178 "compare": false, 00:30:26.178 "compare_and_write": false, 00:30:26.178 "abort": true, 00:30:26.178 "seek_hole": false, 00:30:26.178 "seek_data": false, 00:30:26.178 "copy": true, 00:30:26.178 "nvme_iov_md": false 00:30:26.178 }, 00:30:26.178 "memory_domains": [ 00:30:26.178 { 00:30:26.178 "dma_device_id": "system", 00:30:26.178 "dma_device_type": 1 00:30:26.178 }, 00:30:26.178 { 00:30:26.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:30:26.178 "dma_device_type": 2 00:30:26.178 } 00:30:26.178 ], 00:30:26.178 "driver_specific": {} 00:30:26.178 } 00:30:26.178 ] 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.178 17:27:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.178 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:26.178 "name": "Existed_Raid", 00:30:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.178 "strip_size_kb": 64, 00:30:26.178 "state": "configuring", 00:30:26.178 "raid_level": "concat", 00:30:26.178 "superblock": false, 00:30:26.178 "num_base_bdevs": 3, 00:30:26.178 "num_base_bdevs_discovered": 1, 00:30:26.178 "num_base_bdevs_operational": 3, 00:30:26.178 "base_bdevs_list": [ 00:30:26.178 { 00:30:26.178 "name": "BaseBdev1", 00:30:26.178 "uuid": "c0d993cd-a6bd-49af-95c4-817d9e927c12", 00:30:26.178 "is_configured": true, 00:30:26.178 "data_offset": 0, 00:30:26.178 "data_size": 65536 00:30:26.178 }, 00:30:26.178 { 00:30:26.178 "name": "BaseBdev2", 00:30:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.178 "is_configured": false, 00:30:26.178 "data_offset": 0, 00:30:26.178 "data_size": 0 00:30:26.178 }, 00:30:26.178 { 00:30:26.178 "name": "BaseBdev3", 00:30:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.178 "is_configured": false, 00:30:26.178 "data_offset": 0, 00:30:26.178 "data_size": 0 00:30:26.178 } 00:30:26.178 ] 00:30:26.178 }' 00:30:26.179 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:26.179 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.746 [2024-11-26 17:27:03.894957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:26.746 [2024-11-26 17:27:03.895032] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.746 [2024-11-26 17:27:03.903009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:26.746 [2024-11-26 17:27:03.905218] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:26.746 [2024-11-26 17:27:03.905282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:26.746 [2024-11-26 17:27:03.905295] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:26.746 [2024-11-26 17:27:03.905308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:26.746 17:27:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:26.746 "name": "Existed_Raid", 00:30:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.746 "strip_size_kb": 64, 00:30:26.746 "state": "configuring", 00:30:26.746 "raid_level": "concat", 00:30:26.746 "superblock": false, 00:30:26.746 "num_base_bdevs": 3, 00:30:26.746 "num_base_bdevs_discovered": 1, 00:30:26.746 "num_base_bdevs_operational": 3, 00:30:26.746 "base_bdevs_list": [ 00:30:26.746 { 00:30:26.746 "name": "BaseBdev1", 00:30:26.746 "uuid": "c0d993cd-a6bd-49af-95c4-817d9e927c12", 00:30:26.746 "is_configured": true, 00:30:26.746 "data_offset": 
0, 00:30:26.746 "data_size": 65536 00:30:26.746 }, 00:30:26.746 { 00:30:26.746 "name": "BaseBdev2", 00:30:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.746 "is_configured": false, 00:30:26.746 "data_offset": 0, 00:30:26.746 "data_size": 0 00:30:26.746 }, 00:30:26.746 { 00:30:26.746 "name": "BaseBdev3", 00:30:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.746 "is_configured": false, 00:30:26.746 "data_offset": 0, 00:30:26.746 "data_size": 0 00:30:26.746 } 00:30:26.746 ] 00:30:26.746 }' 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:26.746 17:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.005 [2024-11-26 17:27:04.400420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:27.005 BaseBdev2 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.005 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.005 [ 00:30:27.005 { 00:30:27.005 "name": "BaseBdev2", 00:30:27.005 "aliases": [ 00:30:27.005 "fb107a71-525b-4eb0-bed3-36f8bc7f5a52" 00:30:27.005 ], 00:30:27.005 "product_name": "Malloc disk", 00:30:27.005 "block_size": 512, 00:30:27.005 "num_blocks": 65536, 00:30:27.005 "uuid": "fb107a71-525b-4eb0-bed3-36f8bc7f5a52", 00:30:27.005 "assigned_rate_limits": { 00:30:27.005 "rw_ios_per_sec": 0, 00:30:27.005 "rw_mbytes_per_sec": 0, 00:30:27.005 "r_mbytes_per_sec": 0, 00:30:27.005 "w_mbytes_per_sec": 0 00:30:27.005 }, 00:30:27.005 "claimed": true, 00:30:27.005 "claim_type": "exclusive_write", 00:30:27.005 "zoned": false, 00:30:27.005 "supported_io_types": { 00:30:27.005 "read": true, 00:30:27.005 "write": true, 00:30:27.005 "unmap": true, 00:30:27.005 "flush": true, 00:30:27.005 "reset": true, 00:30:27.005 "nvme_admin": false, 00:30:27.005 "nvme_io": false, 00:30:27.005 "nvme_io_md": false, 00:30:27.005 "write_zeroes": true, 00:30:27.005 "zcopy": true, 00:30:27.005 "get_zone_info": false, 00:30:27.005 "zone_management": false, 00:30:27.005 "zone_append": false, 00:30:27.005 "compare": false, 00:30:27.005 "compare_and_write": false, 00:30:27.005 "abort": true, 00:30:27.005 "seek_hole": 
false, 00:30:27.005 "seek_data": false, 00:30:27.005 "copy": true, 00:30:27.005 "nvme_iov_md": false 00:30:27.005 }, 00:30:27.005 "memory_domains": [ 00:30:27.005 { 00:30:27.005 "dma_device_id": "system", 00:30:27.005 "dma_device_type": 1 00:30:27.005 }, 00:30:27.005 { 00:30:27.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.005 "dma_device_type": 2 00:30:27.005 } 00:30:27.005 ], 00:30:27.005 "driver_specific": {} 00:30:27.005 } 00:30:27.005 ] 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:27.264 "name": "Existed_Raid", 00:30:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.264 "strip_size_kb": 64, 00:30:27.264 "state": "configuring", 00:30:27.264 "raid_level": "concat", 00:30:27.264 "superblock": false, 00:30:27.264 "num_base_bdevs": 3, 00:30:27.264 "num_base_bdevs_discovered": 2, 00:30:27.264 "num_base_bdevs_operational": 3, 00:30:27.264 "base_bdevs_list": [ 00:30:27.264 { 00:30:27.264 "name": "BaseBdev1", 00:30:27.264 "uuid": "c0d993cd-a6bd-49af-95c4-817d9e927c12", 00:30:27.264 "is_configured": true, 00:30:27.264 "data_offset": 0, 00:30:27.264 "data_size": 65536 00:30:27.264 }, 00:30:27.264 { 00:30:27.264 "name": "BaseBdev2", 00:30:27.264 "uuid": "fb107a71-525b-4eb0-bed3-36f8bc7f5a52", 00:30:27.264 "is_configured": true, 00:30:27.264 "data_offset": 0, 00:30:27.264 "data_size": 65536 00:30:27.264 }, 00:30:27.264 { 00:30:27.264 "name": "BaseBdev3", 00:30:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.264 "is_configured": false, 00:30:27.264 "data_offset": 0, 00:30:27.264 "data_size": 0 00:30:27.264 } 00:30:27.264 ] 00:30:27.264 }' 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:27.264 17:27:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.523 [2024-11-26 17:27:04.932661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:27.523 [2024-11-26 17:27:04.932718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:27.523 [2024-11-26 17:27:04.932734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:27.523 [2024-11-26 17:27:04.933013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:27.523 [2024-11-26 17:27:04.933204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:27.523 [2024-11-26 17:27:04.933216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:27.523 [2024-11-26 17:27:04.933478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:27.523 BaseBdev3 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:27.523 17:27:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.523 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.523 [ 00:30:27.523 { 00:30:27.523 "name": "BaseBdev3", 00:30:27.523 "aliases": [ 00:30:27.523 "7c179c4c-c30a-4b72-a173-96ecfdb2e26f" 00:30:27.523 ], 00:30:27.523 "product_name": "Malloc disk", 00:30:27.523 "block_size": 512, 00:30:27.523 "num_blocks": 65536, 00:30:27.523 "uuid": "7c179c4c-c30a-4b72-a173-96ecfdb2e26f", 00:30:27.523 "assigned_rate_limits": { 00:30:27.523 "rw_ios_per_sec": 0, 00:30:27.523 "rw_mbytes_per_sec": 0, 00:30:27.523 "r_mbytes_per_sec": 0, 00:30:27.523 "w_mbytes_per_sec": 0 00:30:27.523 }, 00:30:27.523 "claimed": true, 00:30:27.523 "claim_type": "exclusive_write", 00:30:27.523 "zoned": false, 00:30:27.523 "supported_io_types": { 00:30:27.523 "read": true, 00:30:27.523 "write": true, 00:30:27.523 "unmap": true, 00:30:27.523 "flush": true, 00:30:27.781 "reset": true, 00:30:27.781 "nvme_admin": false, 00:30:27.781 "nvme_io": false, 00:30:27.781 "nvme_io_md": false, 00:30:27.781 "write_zeroes": true, 00:30:27.781 "zcopy": true, 00:30:27.781 "get_zone_info": false, 00:30:27.781 "zone_management": false, 00:30:27.781 "zone_append": false, 00:30:27.781 "compare": false, 
00:30:27.781 "compare_and_write": false, 00:30:27.781 "abort": true, 00:30:27.781 "seek_hole": false, 00:30:27.781 "seek_data": false, 00:30:27.781 "copy": true, 00:30:27.781 "nvme_iov_md": false 00:30:27.781 }, 00:30:27.781 "memory_domains": [ 00:30:27.781 { 00:30:27.781 "dma_device_id": "system", 00:30:27.781 "dma_device_type": 1 00:30:27.781 }, 00:30:27.781 { 00:30:27.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.781 "dma_device_type": 2 00:30:27.781 } 00:30:27.781 ], 00:30:27.781 "driver_specific": {} 00:30:27.781 } 00:30:27.781 ] 00:30:27.781 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.781 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:27.781 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:27.781 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:27.781 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:30:27.781 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.782 17:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.782 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.782 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:27.782 "name": "Existed_Raid", 00:30:27.782 "uuid": "fd29119c-efb1-4b4f-bdfb-dceaac924827", 00:30:27.782 "strip_size_kb": 64, 00:30:27.782 "state": "online", 00:30:27.782 "raid_level": "concat", 00:30:27.782 "superblock": false, 00:30:27.782 "num_base_bdevs": 3, 00:30:27.782 "num_base_bdevs_discovered": 3, 00:30:27.782 "num_base_bdevs_operational": 3, 00:30:27.782 "base_bdevs_list": [ 00:30:27.782 { 00:30:27.782 "name": "BaseBdev1", 00:30:27.782 "uuid": "c0d993cd-a6bd-49af-95c4-817d9e927c12", 00:30:27.782 "is_configured": true, 00:30:27.782 "data_offset": 0, 00:30:27.782 "data_size": 65536 00:30:27.782 }, 00:30:27.782 { 00:30:27.782 "name": "BaseBdev2", 00:30:27.782 "uuid": "fb107a71-525b-4eb0-bed3-36f8bc7f5a52", 00:30:27.782 "is_configured": true, 00:30:27.782 "data_offset": 0, 00:30:27.782 "data_size": 65536 00:30:27.782 }, 00:30:27.782 { 00:30:27.782 "name": "BaseBdev3", 00:30:27.782 "uuid": "7c179c4c-c30a-4b72-a173-96ecfdb2e26f", 00:30:27.782 "is_configured": true, 00:30:27.782 "data_offset": 0, 00:30:27.782 "data_size": 65536 00:30:27.782 } 00:30:27.782 ] 00:30:27.782 }' 00:30:27.782 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:30:27.782 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.040 [2024-11-26 17:27:05.425152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.040 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:28.040 "name": "Existed_Raid", 00:30:28.040 "aliases": [ 00:30:28.040 "fd29119c-efb1-4b4f-bdfb-dceaac924827" 00:30:28.040 ], 00:30:28.040 "product_name": "Raid Volume", 00:30:28.040 "block_size": 512, 00:30:28.040 "num_blocks": 196608, 00:30:28.041 "uuid": "fd29119c-efb1-4b4f-bdfb-dceaac924827", 00:30:28.041 "assigned_rate_limits": { 00:30:28.041 "rw_ios_per_sec": 0, 00:30:28.041 "rw_mbytes_per_sec": 0, 00:30:28.041 "r_mbytes_per_sec": 
0, 00:30:28.041 "w_mbytes_per_sec": 0 00:30:28.041 }, 00:30:28.041 "claimed": false, 00:30:28.041 "zoned": false, 00:30:28.041 "supported_io_types": { 00:30:28.041 "read": true, 00:30:28.041 "write": true, 00:30:28.041 "unmap": true, 00:30:28.041 "flush": true, 00:30:28.041 "reset": true, 00:30:28.041 "nvme_admin": false, 00:30:28.041 "nvme_io": false, 00:30:28.041 "nvme_io_md": false, 00:30:28.041 "write_zeroes": true, 00:30:28.041 "zcopy": false, 00:30:28.041 "get_zone_info": false, 00:30:28.041 "zone_management": false, 00:30:28.041 "zone_append": false, 00:30:28.041 "compare": false, 00:30:28.041 "compare_and_write": false, 00:30:28.041 "abort": false, 00:30:28.041 "seek_hole": false, 00:30:28.041 "seek_data": false, 00:30:28.041 "copy": false, 00:30:28.041 "nvme_iov_md": false 00:30:28.041 }, 00:30:28.041 "memory_domains": [ 00:30:28.041 { 00:30:28.041 "dma_device_id": "system", 00:30:28.041 "dma_device_type": 1 00:30:28.041 }, 00:30:28.041 { 00:30:28.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.041 "dma_device_type": 2 00:30:28.041 }, 00:30:28.041 { 00:30:28.041 "dma_device_id": "system", 00:30:28.041 "dma_device_type": 1 00:30:28.041 }, 00:30:28.041 { 00:30:28.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.041 "dma_device_type": 2 00:30:28.041 }, 00:30:28.041 { 00:30:28.041 "dma_device_id": "system", 00:30:28.041 "dma_device_type": 1 00:30:28.041 }, 00:30:28.041 { 00:30:28.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.041 "dma_device_type": 2 00:30:28.041 } 00:30:28.041 ], 00:30:28.041 "driver_specific": { 00:30:28.041 "raid": { 00:30:28.041 "uuid": "fd29119c-efb1-4b4f-bdfb-dceaac924827", 00:30:28.041 "strip_size_kb": 64, 00:30:28.041 "state": "online", 00:30:28.041 "raid_level": "concat", 00:30:28.041 "superblock": false, 00:30:28.041 "num_base_bdevs": 3, 00:30:28.041 "num_base_bdevs_discovered": 3, 00:30:28.041 "num_base_bdevs_operational": 3, 00:30:28.041 "base_bdevs_list": [ 00:30:28.041 { 00:30:28.041 "name": "BaseBdev1", 
00:30:28.041 "uuid": "c0d993cd-a6bd-49af-95c4-817d9e927c12",
00:30:28.041 "is_configured": true,
00:30:28.041 "data_offset": 0,
00:30:28.041 "data_size": 65536
00:30:28.041 },
00:30:28.041 {
00:30:28.041 "name": "BaseBdev2",
00:30:28.041 "uuid": "fb107a71-525b-4eb0-bed3-36f8bc7f5a52",
00:30:28.041 "is_configured": true,
00:30:28.041 "data_offset": 0,
00:30:28.041 "data_size": 65536
00:30:28.041 },
00:30:28.041 {
00:30:28.041 "name": "BaseBdev3",
00:30:28.041 "uuid": "7c179c4c-c30a-4b72-a173-96ecfdb2e26f",
00:30:28.041 "is_configured": true,
00:30:28.041 "data_offset": 0,
00:30:28.041 "data_size": 65536
00:30:28.041 }
00:30:28.041 ]
00:30:28.041 }
00:30:28.041 }
00:30:28.041 }'
00:30:28.041 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:30:28.299 BaseBdev2
00:30:28.299 BaseBdev3'
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:28.299 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:28.300 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:30:28.300 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.300 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:28.300 [2024-11-26 17:27:05.728947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:30:28.300 [2024-11-26 17:27:05.728982] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:30:28.300 [2024-11-26 17:27:05.729039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:30:28.558 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:28.559 "name": "Existed_Raid",
00:30:28.559 "uuid": "fd29119c-efb1-4b4f-bdfb-dceaac924827",
00:30:28.559 "strip_size_kb": 64,
00:30:28.559 "state": "offline",
00:30:28.559 "raid_level": "concat",
00:30:28.559 "superblock": false,
00:30:28.559 "num_base_bdevs": 3,
00:30:28.559 "num_base_bdevs_discovered": 2,
00:30:28.559 "num_base_bdevs_operational": 2,
00:30:28.559 "base_bdevs_list": [
00:30:28.559 {
00:30:28.559 "name": null,
00:30:28.559 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:28.559 "is_configured": false,
00:30:28.559 "data_offset": 0,
00:30:28.559 "data_size": 65536
00:30:28.559 },
00:30:28.559 {
00:30:28.559 "name": "BaseBdev2",
00:30:28.559 "uuid": "fb107a71-525b-4eb0-bed3-36f8bc7f5a52",
00:30:28.559 "is_configured": true,
00:30:28.559 "data_offset": 0,
00:30:28.559 "data_size": 65536
00:30:28.559 },
00:30:28.559 {
00:30:28.559 "name": "BaseBdev3",
00:30:28.559 "uuid": "7c179c4c-c30a-4b72-a173-96ecfdb2e26f",
00:30:28.559 "is_configured": true,
00:30:28.559 "data_offset": 0,
00:30:28.559 "data_size": 65536
00:30:28.559 }
00:30:28.559 ]
00:30:28.559 }'
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:28.559 17:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.127 [2024-11-26 17:27:06.326922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.127 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.127 [2024-11-26 17:27:06.491240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:30:29.127 [2024-11-26 17:27:06.491316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:30:29.386 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.386 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:30:29.386 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:30:29.386 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:29.386 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.386 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:30:29.386 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.386 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.386 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.387 BaseBdev2
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.387 [
00:30:29.387 {
00:30:29.387 "name": "BaseBdev2",
00:30:29.387 "aliases": [
00:30:29.387 "fcb98a57-8c60-4a77-8790-72500e06a3dd"
00:30:29.387 ],
00:30:29.387 "product_name": "Malloc disk",
00:30:29.387 "block_size": 512,
00:30:29.387 "num_blocks": 65536,
00:30:29.387 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd",
00:30:29.387 "assigned_rate_limits": {
00:30:29.387 "rw_ios_per_sec": 0,
00:30:29.387 "rw_mbytes_per_sec": 0,
00:30:29.387 "r_mbytes_per_sec": 0,
00:30:29.387 "w_mbytes_per_sec": 0
00:30:29.387 },
00:30:29.387 "claimed": false,
00:30:29.387 "zoned": false,
00:30:29.387 "supported_io_types": {
00:30:29.387 "read": true,
00:30:29.387 "write": true,
00:30:29.387 "unmap": true,
00:30:29.387 "flush": true,
00:30:29.387 "reset": true,
00:30:29.387 "nvme_admin": false,
00:30:29.387 "nvme_io": false,
00:30:29.387 "nvme_io_md": false,
00:30:29.387 "write_zeroes": true,
00:30:29.387 "zcopy": true,
00:30:29.387 "get_zone_info": false,
00:30:29.387 "zone_management": false,
00:30:29.387 "zone_append": false,
00:30:29.387 "compare": false,
00:30:29.387 "compare_and_write": false,
00:30:29.387 "abort": true,
00:30:29.387 "seek_hole": false,
00:30:29.387 "seek_data": false,
00:30:29.387 "copy": true,
00:30:29.387 "nvme_iov_md": false
00:30:29.387 },
00:30:29.387 "memory_domains": [
00:30:29.387 {
00:30:29.387 "dma_device_id": "system",
00:30:29.387 "dma_device_type": 1
00:30:29.387 },
00:30:29.387 {
00:30:29.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:29.387 "dma_device_type": 2
00:30:29.387 }
00:30:29.387 ],
00:30:29.387 "driver_specific": {}
00:30:29.387 }
00:30:29.387 ]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.387 BaseBdev3
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.387 [
00:30:29.387 {
00:30:29.387 "name": "BaseBdev3",
00:30:29.387 "aliases": [
00:30:29.387 "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844"
00:30:29.387 ],
00:30:29.387 "product_name": "Malloc disk",
00:30:29.387 "block_size": 512,
00:30:29.387 "num_blocks": 65536,
00:30:29.387 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844",
00:30:29.387 "assigned_rate_limits": {
00:30:29.387 "rw_ios_per_sec": 0,
00:30:29.387 "rw_mbytes_per_sec": 0,
00:30:29.387 "r_mbytes_per_sec": 0,
00:30:29.387 "w_mbytes_per_sec": 0
00:30:29.387 },
00:30:29.387 "claimed": false,
00:30:29.387 "zoned": false,
00:30:29.387 "supported_io_types": {
00:30:29.387 "read": true,
00:30:29.387 "write": true,
00:30:29.387 "unmap": true,
00:30:29.387 "flush": true,
00:30:29.387 "reset": true,
00:30:29.387 "nvme_admin": false,
00:30:29.387 "nvme_io": false,
00:30:29.387 "nvme_io_md": false,
00:30:29.387 "write_zeroes": true,
00:30:29.387 "zcopy": true,
00:30:29.387 "get_zone_info": false,
00:30:29.387 "zone_management": false,
00:30:29.387 "zone_append": false,
00:30:29.387 "compare": false,
00:30:29.387 "compare_and_write": false,
00:30:29.387 "abort": true,
00:30:29.387 "seek_hole": false,
00:30:29.387 "seek_data": false,
00:30:29.387 "copy": true,
00:30:29.387 "nvme_iov_md": false
00:30:29.387 },
00:30:29.387 "memory_domains": [
00:30:29.387 {
00:30:29.387 "dma_device_id": "system",
00:30:29.387 "dma_device_type": 1
00:30:29.387 },
00:30:29.387 {
00:30:29.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:29.387 "dma_device_type": 2
00:30:29.387 }
00:30:29.387 ],
00:30:29.387 "driver_specific": {}
00:30:29.387 }
00:30:29.387 ]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:30:29.387 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.647 [2024-11-26 17:27:06.840292] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:30:29.647 [2024-11-26 17:27:06.840344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:30:29.647 [2024-11-26 17:27:06.840369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:30:29.647 [2024-11-26 17:27:06.842606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:29.647 "name": "Existed_Raid",
00:30:29.647 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:29.647 "strip_size_kb": 64,
00:30:29.647 "state": "configuring",
00:30:29.647 "raid_level": "concat",
00:30:29.647 "superblock": false,
00:30:29.647 "num_base_bdevs": 3,
00:30:29.647 "num_base_bdevs_discovered": 2,
00:30:29.647 "num_base_bdevs_operational": 3,
00:30:29.647 "base_bdevs_list": [
00:30:29.647 {
00:30:29.647 "name": "BaseBdev1",
00:30:29.647 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:29.647 "is_configured": false,
00:30:29.647 "data_offset": 0,
00:30:29.647 "data_size": 0
00:30:29.647 },
00:30:29.647 {
00:30:29.647 "name": "BaseBdev2",
00:30:29.647 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd",
00:30:29.647 "is_configured": true,
00:30:29.647 "data_offset": 0,
00:30:29.647 "data_size": 65536
00:30:29.647 },
00:30:29.647 {
00:30:29.647 "name": "BaseBdev3",
00:30:29.647 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844",
00:30:29.647 "is_configured": true,
00:30:29.647 "data_offset": 0,
00:30:29.647 "data_size": 65536
00:30:29.647 }
00:30:29.647 ]
00:30:29.647 }'
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:29.647 17:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.906 [2024-11-26 17:27:07.304770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.906 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.167 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:30.167 "name": "Existed_Raid",
00:30:30.167 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:30.167 "strip_size_kb": 64,
00:30:30.167 "state": "configuring",
00:30:30.167 "raid_level": "concat",
00:30:30.167 "superblock": false,
00:30:30.167 "num_base_bdevs": 3,
00:30:30.167 "num_base_bdevs_discovered": 1,
00:30:30.167 "num_base_bdevs_operational": 3,
00:30:30.167 "base_bdevs_list": [
00:30:30.167 {
00:30:30.167 "name": "BaseBdev1",
00:30:30.167 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:30.167 "is_configured": false,
00:30:30.167 "data_offset": 0,
00:30:30.167 "data_size": 0
00:30:30.167 },
00:30:30.167 {
00:30:30.167 "name": null,
00:30:30.167 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd",
00:30:30.167 "is_configured": false,
00:30:30.167 "data_offset": 0,
00:30:30.167 "data_size": 65536
00:30:30.167 },
00:30:30.167 {
00:30:30.167 "name": "BaseBdev3",
00:30:30.167 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844",
00:30:30.167 "is_configured": true,
00:30:30.167 "data_offset": 0,
00:30:30.167 "data_size": 65536
00:30:30.167 }
00:30:30.167 ]
00:30:30.167 }'
00:30:30.168 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:30.168 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.432 [2024-11-26 17:27:07.831447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:30:30.432 BaseBdev1
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.432 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.432 [
00:30:30.432 {
00:30:30.432 "name": "BaseBdev1",
00:30:30.432 "aliases": [
00:30:30.432 "ea58c023-6085-4675-aa37-1c02827e8410"
00:30:30.432 ],
00:30:30.432 "product_name": "Malloc disk",
00:30:30.432 "block_size": 512,
00:30:30.432 "num_blocks": 65536,
00:30:30.432 "uuid": "ea58c023-6085-4675-aa37-1c02827e8410",
00:30:30.432 "assigned_rate_limits": {
00:30:30.432 "rw_ios_per_sec": 0,
00:30:30.432 "rw_mbytes_per_sec": 0,
00:30:30.432 "r_mbytes_per_sec": 0,
00:30:30.432 "w_mbytes_per_sec": 0
00:30:30.432 },
00:30:30.432 "claimed": true,
00:30:30.432 "claim_type": "exclusive_write",
00:30:30.432 "zoned": false,
00:30:30.432 "supported_io_types": {
00:30:30.432 "read": true,
00:30:30.432 "write": true,
00:30:30.432 "unmap": true,
00:30:30.432 "flush": true,
00:30:30.432 "reset": true,
00:30:30.432 "nvme_admin": false,
00:30:30.432 "nvme_io": false,
00:30:30.432 "nvme_io_md": false,
00:30:30.432 "write_zeroes": true,
00:30:30.432 "zcopy": true,
00:30:30.432 "get_zone_info": false,
00:30:30.432 "zone_management": false,
00:30:30.432 "zone_append": false,
00:30:30.432 "compare": false,
00:30:30.432 "compare_and_write": false,
00:30:30.433 "abort": true,
00:30:30.433 "seek_hole": false,
00:30:30.433 "seek_data": false,
00:30:30.433 "copy": true,
00:30:30.433 "nvme_iov_md": false
00:30:30.433 },
00:30:30.433 "memory_domains": [
00:30:30.433 {
00:30:30.433 "dma_device_id": "system",
00:30:30.433 "dma_device_type": 1
00:30:30.433 },
00:30:30.433 {
00:30:30.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:30.433 "dma_device_type": 2
00:30:30.433 }
00:30:30.433 ],
00:30:30.433 "driver_specific": {}
00:30:30.433 }
00:30:30.433 ]
00:30:30.433 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.433 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:30:30.433 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:30.692 "name": "Existed_Raid",
00:30:30.692 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:30.692 "strip_size_kb": 64,
00:30:30.692 "state": "configuring",
00:30:30.692 "raid_level": "concat",
00:30:30.692 "superblock": false,
00:30:30.692 "num_base_bdevs": 3,
00:30:30.692 "num_base_bdevs_discovered": 2,
00:30:30.692 "num_base_bdevs_operational": 3,
00:30:30.692 "base_bdevs_list": [
00:30:30.692 {
00:30:30.692 "name": "BaseBdev1",
00:30:30.692 "uuid": "ea58c023-6085-4675-aa37-1c02827e8410",
00:30:30.692 "is_configured": true,
00:30:30.692 "data_offset": 0,
00:30:30.692 "data_size": 65536
00:30:30.692 },
00:30:30.692 {
00:30:30.692 "name": null,
00:30:30.692 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd",
00:30:30.692 "is_configured": false,
00:30:30.692 "data_offset": 0,
00:30:30.692 "data_size": 65536
00:30:30.692 },
00:30:30.692 {
00:30:30.692 "name": "BaseBdev3",
00:30:30.692 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844",
00:30:30.692 "is_configured": true,
00:30:30.692 "data_offset": 0,
00:30:30.692 "data_size": 65536
00:30:30.692 }
00:30:30.692 ]
00:30:30.692 }'
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:30.692 17:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.952 [2024-11-26 17:27:08.383622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.952 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:31.211 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.211 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:31.211 "name": "Existed_Raid",
00:30:31.211 "uuid":
"00000000-0000-0000-0000-000000000000", 00:30:31.211 "strip_size_kb": 64, 00:30:31.211 "state": "configuring", 00:30:31.211 "raid_level": "concat", 00:30:31.211 "superblock": false, 00:30:31.211 "num_base_bdevs": 3, 00:30:31.211 "num_base_bdevs_discovered": 1, 00:30:31.211 "num_base_bdevs_operational": 3, 00:30:31.211 "base_bdevs_list": [ 00:30:31.211 { 00:30:31.211 "name": "BaseBdev1", 00:30:31.211 "uuid": "ea58c023-6085-4675-aa37-1c02827e8410", 00:30:31.211 "is_configured": true, 00:30:31.211 "data_offset": 0, 00:30:31.211 "data_size": 65536 00:30:31.211 }, 00:30:31.211 { 00:30:31.211 "name": null, 00:30:31.211 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd", 00:30:31.211 "is_configured": false, 00:30:31.211 "data_offset": 0, 00:30:31.211 "data_size": 65536 00:30:31.211 }, 00:30:31.211 { 00:30:31.211 "name": null, 00:30:31.211 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844", 00:30:31.211 "is_configured": false, 00:30:31.211 "data_offset": 0, 00:30:31.211 "data_size": 65536 00:30:31.211 } 00:30:31.211 ] 00:30:31.211 }' 00:30:31.211 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:31.211 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.470 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:31.470 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.470 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.470 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.470 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.729 [2024-11-26 17:27:08.923780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:31.729 "name": "Existed_Raid", 00:30:31.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.729 "strip_size_kb": 64, 00:30:31.729 "state": "configuring", 00:30:31.729 "raid_level": "concat", 00:30:31.729 "superblock": false, 00:30:31.729 "num_base_bdevs": 3, 00:30:31.729 "num_base_bdevs_discovered": 2, 00:30:31.729 "num_base_bdevs_operational": 3, 00:30:31.729 "base_bdevs_list": [ 00:30:31.729 { 00:30:31.729 "name": "BaseBdev1", 00:30:31.729 "uuid": "ea58c023-6085-4675-aa37-1c02827e8410", 00:30:31.729 "is_configured": true, 00:30:31.729 "data_offset": 0, 00:30:31.729 "data_size": 65536 00:30:31.729 }, 00:30:31.729 { 00:30:31.729 "name": null, 00:30:31.729 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd", 00:30:31.729 "is_configured": false, 00:30:31.729 "data_offset": 0, 00:30:31.729 "data_size": 65536 00:30:31.729 }, 00:30:31.729 { 00:30:31.729 "name": "BaseBdev3", 00:30:31.729 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844", 00:30:31.729 "is_configured": true, 00:30:31.729 "data_offset": 0, 00:30:31.729 "data_size": 65536 00:30:31.729 } 00:30:31.729 ] 00:30:31.729 }' 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:31.729 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.987 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:31.987 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.987 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:31.987 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.987 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.987 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:31.987 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:31.987 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.987 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.987 [2024-11-26 17:27:09.415904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:32.246 17:27:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.246 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:32.246 "name": "Existed_Raid", 00:30:32.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.246 "strip_size_kb": 64, 00:30:32.246 "state": "configuring", 00:30:32.246 "raid_level": "concat", 00:30:32.246 "superblock": false, 00:30:32.246 "num_base_bdevs": 3, 00:30:32.246 "num_base_bdevs_discovered": 1, 00:30:32.246 "num_base_bdevs_operational": 3, 00:30:32.246 "base_bdevs_list": [ 00:30:32.246 { 00:30:32.246 "name": null, 00:30:32.246 "uuid": "ea58c023-6085-4675-aa37-1c02827e8410", 00:30:32.246 "is_configured": false, 00:30:32.246 "data_offset": 0, 00:30:32.246 "data_size": 65536 00:30:32.246 }, 00:30:32.246 { 00:30:32.246 "name": null, 00:30:32.246 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd", 00:30:32.246 "is_configured": false, 00:30:32.246 "data_offset": 0, 00:30:32.246 "data_size": 65536 00:30:32.246 }, 00:30:32.246 { 00:30:32.246 "name": "BaseBdev3", 00:30:32.246 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844", 00:30:32.246 "is_configured": true, 00:30:32.246 "data_offset": 0, 00:30:32.246 "data_size": 65536 00:30:32.246 } 00:30:32.247 ] 00:30:32.247 }' 00:30:32.247 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:32.247 17:27:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.814 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.814 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.814 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.814 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:32.814 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.814 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.814 [2024-11-26 17:27:10.006747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:32.814 17:27:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:32.814 "name": "Existed_Raid", 00:30:32.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.814 "strip_size_kb": 64, 00:30:32.814 "state": "configuring", 00:30:32.814 "raid_level": "concat", 00:30:32.814 "superblock": false, 00:30:32.814 "num_base_bdevs": 3, 00:30:32.814 "num_base_bdevs_discovered": 2, 00:30:32.814 "num_base_bdevs_operational": 3, 00:30:32.814 "base_bdevs_list": [ 00:30:32.814 { 00:30:32.814 "name": null, 00:30:32.814 "uuid": "ea58c023-6085-4675-aa37-1c02827e8410", 00:30:32.814 "is_configured": false, 00:30:32.814 "data_offset": 0, 00:30:32.814 "data_size": 65536 00:30:32.814 }, 00:30:32.814 { 00:30:32.814 "name": "BaseBdev2", 00:30:32.814 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd", 00:30:32.814 "is_configured": true, 00:30:32.814 "data_offset": 
0, 00:30:32.814 "data_size": 65536 00:30:32.814 }, 00:30:32.814 { 00:30:32.814 "name": "BaseBdev3", 00:30:32.814 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844", 00:30:32.814 "is_configured": true, 00:30:32.814 "data_offset": 0, 00:30:32.814 "data_size": 65536 00:30:32.814 } 00:30:32.814 ] 00:30:32.814 }' 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:32.814 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.074 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ea58c023-6085-4675-aa37-1c02827e8410 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.336 [2024-11-26 17:27:10.595472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:33.336 [2024-11-26 17:27:10.595653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:33.336 [2024-11-26 17:27:10.595678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:33.336 [2024-11-26 17:27:10.595956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:33.336 [2024-11-26 17:27:10.596134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:33.336 [2024-11-26 17:27:10.596146] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:33.336 [2024-11-26 17:27:10.596413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:33.336 NewBaseBdev 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:33.336 
17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.336 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.336 [ 00:30:33.336 { 00:30:33.336 "name": "NewBaseBdev", 00:30:33.336 "aliases": [ 00:30:33.336 "ea58c023-6085-4675-aa37-1c02827e8410" 00:30:33.336 ], 00:30:33.336 "product_name": "Malloc disk", 00:30:33.336 "block_size": 512, 00:30:33.336 "num_blocks": 65536, 00:30:33.336 "uuid": "ea58c023-6085-4675-aa37-1c02827e8410", 00:30:33.336 "assigned_rate_limits": { 00:30:33.336 "rw_ios_per_sec": 0, 00:30:33.336 "rw_mbytes_per_sec": 0, 00:30:33.337 "r_mbytes_per_sec": 0, 00:30:33.337 "w_mbytes_per_sec": 0 00:30:33.337 }, 00:30:33.337 "claimed": true, 00:30:33.337 "claim_type": "exclusive_write", 00:30:33.337 "zoned": false, 00:30:33.337 "supported_io_types": { 00:30:33.337 "read": true, 00:30:33.337 "write": true, 00:30:33.337 "unmap": true, 00:30:33.337 "flush": true, 00:30:33.337 "reset": true, 00:30:33.337 "nvme_admin": false, 00:30:33.337 "nvme_io": false, 00:30:33.337 "nvme_io_md": false, 00:30:33.337 "write_zeroes": true, 00:30:33.337 "zcopy": true, 00:30:33.337 "get_zone_info": false, 00:30:33.337 "zone_management": false, 00:30:33.337 "zone_append": false, 00:30:33.337 "compare": false, 00:30:33.337 "compare_and_write": false, 00:30:33.337 "abort": true, 00:30:33.337 "seek_hole": false, 00:30:33.337 "seek_data": false, 00:30:33.337 "copy": true, 00:30:33.337 "nvme_iov_md": false 00:30:33.337 }, 00:30:33.337 
"memory_domains": [ 00:30:33.337 { 00:30:33.337 "dma_device_id": "system", 00:30:33.337 "dma_device_type": 1 00:30:33.337 }, 00:30:33.337 { 00:30:33.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:33.337 "dma_device_type": 2 00:30:33.337 } 00:30:33.337 ], 00:30:33.337 "driver_specific": {} 00:30:33.337 } 00:30:33.337 ] 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:33.337 "name": "Existed_Raid", 00:30:33.337 "uuid": "35b9fa7f-db44-4bd1-9b8c-1674b4711b8d", 00:30:33.337 "strip_size_kb": 64, 00:30:33.337 "state": "online", 00:30:33.337 "raid_level": "concat", 00:30:33.337 "superblock": false, 00:30:33.337 "num_base_bdevs": 3, 00:30:33.337 "num_base_bdevs_discovered": 3, 00:30:33.337 "num_base_bdevs_operational": 3, 00:30:33.337 "base_bdevs_list": [ 00:30:33.337 { 00:30:33.337 "name": "NewBaseBdev", 00:30:33.337 "uuid": "ea58c023-6085-4675-aa37-1c02827e8410", 00:30:33.337 "is_configured": true, 00:30:33.337 "data_offset": 0, 00:30:33.337 "data_size": 65536 00:30:33.337 }, 00:30:33.337 { 00:30:33.337 "name": "BaseBdev2", 00:30:33.337 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd", 00:30:33.337 "is_configured": true, 00:30:33.337 "data_offset": 0, 00:30:33.337 "data_size": 65536 00:30:33.337 }, 00:30:33.337 { 00:30:33.337 "name": "BaseBdev3", 00:30:33.337 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844", 00:30:33.337 "is_configured": true, 00:30:33.337 "data_offset": 0, 00:30:33.337 "data_size": 65536 00:30:33.337 } 00:30:33.337 ] 00:30:33.337 }' 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:33.337 17:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:33.904 [2024-11-26 17:27:11.116007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.904 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:33.904 "name": "Existed_Raid", 00:30:33.904 "aliases": [ 00:30:33.904 "35b9fa7f-db44-4bd1-9b8c-1674b4711b8d" 00:30:33.904 ], 00:30:33.904 "product_name": "Raid Volume", 00:30:33.904 "block_size": 512, 00:30:33.904 "num_blocks": 196608, 00:30:33.904 "uuid": "35b9fa7f-db44-4bd1-9b8c-1674b4711b8d", 00:30:33.904 "assigned_rate_limits": { 00:30:33.904 "rw_ios_per_sec": 0, 00:30:33.904 "rw_mbytes_per_sec": 0, 00:30:33.904 "r_mbytes_per_sec": 0, 00:30:33.904 "w_mbytes_per_sec": 0 00:30:33.904 }, 00:30:33.904 "claimed": false, 00:30:33.904 "zoned": false, 00:30:33.904 "supported_io_types": { 00:30:33.904 "read": true, 00:30:33.904 "write": true, 00:30:33.904 "unmap": true, 00:30:33.904 "flush": true, 00:30:33.904 "reset": true, 00:30:33.904 "nvme_admin": false, 00:30:33.904 "nvme_io": false, 00:30:33.904 "nvme_io_md": false, 00:30:33.904 "write_zeroes": true, 
00:30:33.904 "zcopy": false, 00:30:33.904 "get_zone_info": false, 00:30:33.904 "zone_management": false, 00:30:33.904 "zone_append": false, 00:30:33.904 "compare": false, 00:30:33.904 "compare_and_write": false, 00:30:33.904 "abort": false, 00:30:33.904 "seek_hole": false, 00:30:33.904 "seek_data": false, 00:30:33.904 "copy": false, 00:30:33.904 "nvme_iov_md": false 00:30:33.904 }, 00:30:33.904 "memory_domains": [ 00:30:33.904 { 00:30:33.904 "dma_device_id": "system", 00:30:33.904 "dma_device_type": 1 00:30:33.904 }, 00:30:33.904 { 00:30:33.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:33.904 "dma_device_type": 2 00:30:33.904 }, 00:30:33.904 { 00:30:33.904 "dma_device_id": "system", 00:30:33.904 "dma_device_type": 1 00:30:33.904 }, 00:30:33.904 { 00:30:33.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:33.904 "dma_device_type": 2 00:30:33.904 }, 00:30:33.904 { 00:30:33.904 "dma_device_id": "system", 00:30:33.904 "dma_device_type": 1 00:30:33.904 }, 00:30:33.904 { 00:30:33.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:33.904 "dma_device_type": 2 00:30:33.904 } 00:30:33.904 ], 00:30:33.904 "driver_specific": { 00:30:33.904 "raid": { 00:30:33.904 "uuid": "35b9fa7f-db44-4bd1-9b8c-1674b4711b8d", 00:30:33.904 "strip_size_kb": 64, 00:30:33.904 "state": "online", 00:30:33.904 "raid_level": "concat", 00:30:33.904 "superblock": false, 00:30:33.904 "num_base_bdevs": 3, 00:30:33.904 "num_base_bdevs_discovered": 3, 00:30:33.904 "num_base_bdevs_operational": 3, 00:30:33.904 "base_bdevs_list": [ 00:30:33.904 { 00:30:33.905 "name": "NewBaseBdev", 00:30:33.905 "uuid": "ea58c023-6085-4675-aa37-1c02827e8410", 00:30:33.905 "is_configured": true, 00:30:33.905 "data_offset": 0, 00:30:33.905 "data_size": 65536 00:30:33.905 }, 00:30:33.905 { 00:30:33.905 "name": "BaseBdev2", 00:30:33.905 "uuid": "fcb98a57-8c60-4a77-8790-72500e06a3dd", 00:30:33.905 "is_configured": true, 00:30:33.905 "data_offset": 0, 00:30:33.905 "data_size": 65536 00:30:33.905 }, 00:30:33.905 { 
00:30:33.905 "name": "BaseBdev3", 00:30:33.905 "uuid": "6db5d762-6f7f-4ca0-a9de-5b1eff2ee844", 00:30:33.905 "is_configured": true, 00:30:33.905 "data_offset": 0, 00:30:33.905 "data_size": 65536 00:30:33.905 } 00:30:33.905 ] 00:30:33.905 } 00:30:33.905 } 00:30:33.905 }' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:33.905 BaseBdev2 00:30:33.905 BaseBdev3' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.905 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:30:34.165 [2024-11-26 17:27:11.391697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:34.165 [2024-11-26 17:27:11.391851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:34.165 [2024-11-26 17:27:11.391957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:34.165 [2024-11-26 17:27:11.392019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:34.165 [2024-11-26 17:27:11.392036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66013 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 66013 ']' 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 66013 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66013 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66013' 00:30:34.165 killing process with pid 66013 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 66013 00:30:34.165 [2024-11-26 17:27:11.437691] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:34.165 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 66013 00:30:34.423 [2024-11-26 17:27:11.756789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:35.824 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:30:35.824 ************************************ 00:30:35.824 END TEST raid_state_function_test 00:30:35.824 ************************************ 00:30:35.824 00:30:35.824 real 0m11.092s 00:30:35.824 user 0m17.563s 00:30:35.824 sys 0m1.996s 00:30:35.824 17:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.824 17:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.824 17:27:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:30:35.824 17:27:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:35.824 17:27:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.824 17:27:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:35.824 ************************************ 00:30:35.824 START TEST raid_state_function_test_sb 00:30:35.824 ************************************ 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:35.824 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:30:35.825 Process raid pid: 66640 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66640 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66640' 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66640 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66640 ']' 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
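The jq filter at bdev_raid.sh@188 earlier in this log ('.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name') extracts the names of the configured base bdevs from bdev_get_bdevs output. A minimal Python sketch of that same selection, run against an abbreviated stand-in for the JSON dumped above (only the fields the filter touches are reproduced; the structure is taken from the log, the helper name is illustrative):

```python
import json

# Abbreviated stand-in for the bdev_get_bdevs output captured in this log;
# only the fields the jq filter reads are kept.
raid_bdev_json = """
{
  "driver_specific": {
    "raid": {
      "state": "online",
      "raid_level": "concat",
      "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true}
      ]
    }
  }
}
"""

def configured_base_bdev_names(bdev_info):
    """Python equivalent of the jq filter
    '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'."""
    return [
        base["name"]
        for base in bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
        if base["is_configured"]
    ]

names = configured_base_bdev_names(json.loads(raid_bdev_json))
print(names)  # ['NewBaseBdev', 'BaseBdev2', 'BaseBdev3']
```

In the shell test the resulting newline-separated names become $base_bdev_names, which the @191 loop then iterates to compare each base bdev's block_size/md_size/md_interleave/dif_type against the raid bdev's.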
00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.825 17:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.825 [2024-11-26 17:27:13.148327] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:30:35.825 [2024-11-26 17:27:13.148503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.084 [2024-11-26 17:27:13.342352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.084 [2024-11-26 17:27:13.463386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.343 [2024-11-26 17:27:13.687352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:36.343 [2024-11-26 17:27:13.687387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.910 [2024-11-26 17:27:14.097873] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:36.910 [2024-11-26 17:27:14.098111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:36.910 [2024-11-26 
17:27:14.098136] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:36.910 [2024-11-26 17:27:14.098152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:36.910 [2024-11-26 17:27:14.098161] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:36.910 [2024-11-26 17:27:14.098175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.910 "name": "Existed_Raid", 00:30:36.910 "uuid": "7ce0772f-ca41-48c3-b64e-6be560420d58", 00:30:36.910 "strip_size_kb": 64, 00:30:36.910 "state": "configuring", 00:30:36.910 "raid_level": "concat", 00:30:36.910 "superblock": true, 00:30:36.910 "num_base_bdevs": 3, 00:30:36.910 "num_base_bdevs_discovered": 0, 00:30:36.910 "num_base_bdevs_operational": 3, 00:30:36.910 "base_bdevs_list": [ 00:30:36.910 { 00:30:36.910 "name": "BaseBdev1", 00:30:36.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.910 "is_configured": false, 00:30:36.910 "data_offset": 0, 00:30:36.910 "data_size": 0 00:30:36.910 }, 00:30:36.910 { 00:30:36.910 "name": "BaseBdev2", 00:30:36.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.910 "is_configured": false, 00:30:36.910 "data_offset": 0, 00:30:36.910 "data_size": 0 00:30:36.910 }, 00:30:36.910 { 00:30:36.910 "name": "BaseBdev3", 00:30:36.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.910 "is_configured": false, 00:30:36.910 "data_offset": 0, 00:30:36.910 "data_size": 0 00:30:36.910 } 00:30:36.910 ] 00:30:36.910 }' 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:36.910 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.169 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:37.169 17:27:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.169 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.169 [2024-11-26 17:27:14.529920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:37.169 [2024-11-26 17:27:14.529990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:37.169 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.169 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:37.169 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.169 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.169 [2024-11-26 17:27:14.541946] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:37.169 [2024-11-26 17:27:14.542020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:37.169 [2024-11-26 17:27:14.542045] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:37.169 [2024-11-26 17:27:14.542058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:37.169 [2024-11-26 17:27:14.542067] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:37.169 [2024-11-26 17:27:14.542089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:37.169 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.169 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:37.169 
17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.170 [2024-11-26 17:27:14.588466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:37.170 BaseBdev1 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.170 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.170 [ 00:30:37.170 { 
00:30:37.170 "name": "BaseBdev1", 00:30:37.170 "aliases": [ 00:30:37.170 "bbd905b9-3adf-4d41-9776-4ec0f1362820" 00:30:37.170 ], 00:30:37.170 "product_name": "Malloc disk", 00:30:37.170 "block_size": 512, 00:30:37.170 "num_blocks": 65536, 00:30:37.170 "uuid": "bbd905b9-3adf-4d41-9776-4ec0f1362820", 00:30:37.170 "assigned_rate_limits": { 00:30:37.170 "rw_ios_per_sec": 0, 00:30:37.170 "rw_mbytes_per_sec": 0, 00:30:37.170 "r_mbytes_per_sec": 0, 00:30:37.170 "w_mbytes_per_sec": 0 00:30:37.170 }, 00:30:37.170 "claimed": true, 00:30:37.170 "claim_type": "exclusive_write", 00:30:37.170 "zoned": false, 00:30:37.429 "supported_io_types": { 00:30:37.429 "read": true, 00:30:37.429 "write": true, 00:30:37.429 "unmap": true, 00:30:37.429 "flush": true, 00:30:37.429 "reset": true, 00:30:37.429 "nvme_admin": false, 00:30:37.429 "nvme_io": false, 00:30:37.429 "nvme_io_md": false, 00:30:37.429 "write_zeroes": true, 00:30:37.429 "zcopy": true, 00:30:37.429 "get_zone_info": false, 00:30:37.429 "zone_management": false, 00:30:37.429 "zone_append": false, 00:30:37.429 "compare": false, 00:30:37.429 "compare_and_write": false, 00:30:37.429 "abort": true, 00:30:37.429 "seek_hole": false, 00:30:37.429 "seek_data": false, 00:30:37.429 "copy": true, 00:30:37.429 "nvme_iov_md": false 00:30:37.429 }, 00:30:37.429 "memory_domains": [ 00:30:37.429 { 00:30:37.429 "dma_device_id": "system", 00:30:37.429 "dma_device_type": 1 00:30:37.429 }, 00:30:37.429 { 00:30:37.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:37.429 "dma_device_type": 2 00:30:37.429 } 00:30:37.429 ], 00:30:37.429 "driver_specific": {} 00:30:37.429 } 00:30:37.429 ] 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:37.429 "name": "Existed_Raid", 00:30:37.429 "uuid": "89ed3820-d6fc-46dc-950a-a4065887907e", 00:30:37.429 "strip_size_kb": 64, 00:30:37.429 "state": "configuring", 00:30:37.429 "raid_level": "concat", 00:30:37.429 "superblock": true, 00:30:37.429 
"num_base_bdevs": 3, 00:30:37.429 "num_base_bdevs_discovered": 1, 00:30:37.429 "num_base_bdevs_operational": 3, 00:30:37.429 "base_bdevs_list": [ 00:30:37.429 { 00:30:37.429 "name": "BaseBdev1", 00:30:37.429 "uuid": "bbd905b9-3adf-4d41-9776-4ec0f1362820", 00:30:37.429 "is_configured": true, 00:30:37.429 "data_offset": 2048, 00:30:37.429 "data_size": 63488 00:30:37.429 }, 00:30:37.429 { 00:30:37.429 "name": "BaseBdev2", 00:30:37.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.429 "is_configured": false, 00:30:37.429 "data_offset": 0, 00:30:37.429 "data_size": 0 00:30:37.429 }, 00:30:37.429 { 00:30:37.429 "name": "BaseBdev3", 00:30:37.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.429 "is_configured": false, 00:30:37.429 "data_offset": 0, 00:30:37.429 "data_size": 0 00:30:37.429 } 00:30:37.429 ] 00:30:37.429 }' 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:37.429 17:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.689 [2024-11-26 17:27:15.068642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:37.689 [2024-11-26 17:27:15.068861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:37.689 
17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.689 [2024-11-26 17:27:15.080701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:37.689 [2024-11-26 17:27:15.083014] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:37.689 [2024-11-26 17:27:15.083240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:37.689 [2024-11-26 17:27:15.083265] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:37.689 [2024-11-26 17:27:15.083282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.689 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.948 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:37.948 "name": "Existed_Raid", 00:30:37.948 "uuid": "033d5d3c-d7b4-4fa6-b50c-f55587ad388f", 00:30:37.948 "strip_size_kb": 64, 00:30:37.948 "state": "configuring", 00:30:37.948 "raid_level": "concat", 00:30:37.948 "superblock": true, 00:30:37.948 "num_base_bdevs": 3, 00:30:37.948 "num_base_bdevs_discovered": 1, 00:30:37.948 "num_base_bdevs_operational": 3, 00:30:37.948 "base_bdevs_list": [ 00:30:37.948 { 00:30:37.948 "name": "BaseBdev1", 00:30:37.948 "uuid": "bbd905b9-3adf-4d41-9776-4ec0f1362820", 00:30:37.948 "is_configured": true, 00:30:37.948 "data_offset": 2048, 00:30:37.948 "data_size": 63488 00:30:37.948 }, 00:30:37.948 { 00:30:37.948 "name": "BaseBdev2", 00:30:37.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.948 "is_configured": false, 00:30:37.948 "data_offset": 0, 00:30:37.948 "data_size": 0 00:30:37.948 }, 00:30:37.948 { 00:30:37.948 "name": "BaseBdev3", 00:30:37.948 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:37.948 "is_configured": false, 00:30:37.948 "data_offset": 0, 00:30:37.948 "data_size": 0 00:30:37.948 } 00:30:37.948 ] 00:30:37.948 }' 00:30:37.949 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:37.949 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.208 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:38.208 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.208 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.208 [2024-11-26 17:27:15.600250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:38.208 BaseBdev2 00:30:38.208 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.209 [ 00:30:38.209 { 00:30:38.209 "name": "BaseBdev2", 00:30:38.209 "aliases": [ 00:30:38.209 "ad79eb65-d659-4ceb-ae6b-9a480a28958f" 00:30:38.209 ], 00:30:38.209 "product_name": "Malloc disk", 00:30:38.209 "block_size": 512, 00:30:38.209 "num_blocks": 65536, 00:30:38.209 "uuid": "ad79eb65-d659-4ceb-ae6b-9a480a28958f", 00:30:38.209 "assigned_rate_limits": { 00:30:38.209 "rw_ios_per_sec": 0, 00:30:38.209 "rw_mbytes_per_sec": 0, 00:30:38.209 "r_mbytes_per_sec": 0, 00:30:38.209 "w_mbytes_per_sec": 0 00:30:38.209 }, 00:30:38.209 "claimed": true, 00:30:38.209 "claim_type": "exclusive_write", 00:30:38.209 "zoned": false, 00:30:38.209 "supported_io_types": { 00:30:38.209 "read": true, 00:30:38.209 "write": true, 00:30:38.209 "unmap": true, 00:30:38.209 "flush": true, 00:30:38.209 "reset": true, 00:30:38.209 "nvme_admin": false, 00:30:38.209 "nvme_io": false, 00:30:38.209 "nvme_io_md": false, 00:30:38.209 "write_zeroes": true, 00:30:38.209 "zcopy": true, 00:30:38.209 "get_zone_info": false, 00:30:38.209 "zone_management": false, 00:30:38.209 "zone_append": false, 00:30:38.209 "compare": false, 00:30:38.209 "compare_and_write": false, 00:30:38.209 "abort": true, 00:30:38.209 "seek_hole": false, 00:30:38.209 "seek_data": false, 00:30:38.209 "copy": true, 00:30:38.209 "nvme_iov_md": false 00:30:38.209 }, 00:30:38.209 "memory_domains": [ 00:30:38.209 { 00:30:38.209 "dma_device_id": "system", 00:30:38.209 "dma_device_type": 1 00:30:38.209 }, 00:30:38.209 { 00:30:38.209 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.209 "dma_device_type": 2 00:30:38.209 } 00:30:38.209 ], 00:30:38.209 "driver_specific": {} 00:30:38.209 } 00:30:38.209 ] 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.209 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.468 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.468 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.468 "name": "Existed_Raid", 00:30:38.468 "uuid": "033d5d3c-d7b4-4fa6-b50c-f55587ad388f", 00:30:38.468 "strip_size_kb": 64, 00:30:38.468 "state": "configuring", 00:30:38.468 "raid_level": "concat", 00:30:38.469 "superblock": true, 00:30:38.469 "num_base_bdevs": 3, 00:30:38.469 "num_base_bdevs_discovered": 2, 00:30:38.469 "num_base_bdevs_operational": 3, 00:30:38.469 "base_bdevs_list": [ 00:30:38.469 { 00:30:38.469 "name": "BaseBdev1", 00:30:38.469 "uuid": "bbd905b9-3adf-4d41-9776-4ec0f1362820", 00:30:38.469 "is_configured": true, 00:30:38.469 "data_offset": 2048, 00:30:38.469 "data_size": 63488 00:30:38.469 }, 00:30:38.469 { 00:30:38.469 "name": "BaseBdev2", 00:30:38.469 "uuid": "ad79eb65-d659-4ceb-ae6b-9a480a28958f", 00:30:38.469 "is_configured": true, 00:30:38.469 "data_offset": 2048, 00:30:38.469 "data_size": 63488 00:30:38.469 }, 00:30:38.469 { 00:30:38.469 "name": "BaseBdev3", 00:30:38.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.469 "is_configured": false, 00:30:38.469 "data_offset": 0, 00:30:38.469 "data_size": 0 00:30:38.469 } 00:30:38.469 ] 00:30:38.469 }' 00:30:38.469 17:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.469 17:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.727 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:38.727 17:27:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.727 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.727 [2024-11-26 17:27:16.157937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:38.727 [2024-11-26 17:27:16.158459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:38.727 [2024-11-26 17:27:16.158602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:38.727 [2024-11-26 17:27:16.158946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:38.727 BaseBdev3 00:30:38.727 [2024-11-26 17:27:16.159278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:38.728 [2024-11-26 17:27:16.159295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:38.728 [2024-11-26 17:27:16.159444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.728 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.987 [ 00:30:38.987 { 00:30:38.987 "name": "BaseBdev3", 00:30:38.987 "aliases": [ 00:30:38.987 "f66391f1-cb6e-4b91-93b0-0d81fae15d35" 00:30:38.987 ], 00:30:38.987 "product_name": "Malloc disk", 00:30:38.987 "block_size": 512, 00:30:38.987 "num_blocks": 65536, 00:30:38.987 "uuid": "f66391f1-cb6e-4b91-93b0-0d81fae15d35", 00:30:38.988 "assigned_rate_limits": { 00:30:38.988 "rw_ios_per_sec": 0, 00:30:38.988 "rw_mbytes_per_sec": 0, 00:30:38.988 "r_mbytes_per_sec": 0, 00:30:38.988 "w_mbytes_per_sec": 0 00:30:38.988 }, 00:30:38.988 "claimed": true, 00:30:38.988 "claim_type": "exclusive_write", 00:30:38.988 "zoned": false, 00:30:38.988 "supported_io_types": { 00:30:38.988 "read": true, 00:30:38.988 "write": true, 00:30:38.988 "unmap": true, 00:30:38.988 "flush": true, 00:30:38.988 "reset": true, 00:30:38.988 "nvme_admin": false, 00:30:38.988 "nvme_io": false, 00:30:38.988 "nvme_io_md": false, 00:30:38.988 "write_zeroes": true, 00:30:38.988 "zcopy": true, 00:30:38.988 "get_zone_info": false, 00:30:38.988 "zone_management": false, 00:30:38.988 "zone_append": false, 00:30:38.988 "compare": false, 00:30:38.988 "compare_and_write": false, 00:30:38.988 "abort": true, 00:30:38.988 "seek_hole": false, 00:30:38.988 "seek_data": false, 
00:30:38.988 "copy": true, 00:30:38.988 "nvme_iov_md": false 00:30:38.988 }, 00:30:38.988 "memory_domains": [ 00:30:38.988 { 00:30:38.988 "dma_device_id": "system", 00:30:38.988 "dma_device_type": 1 00:30:38.988 }, 00:30:38.988 { 00:30:38.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.988 "dma_device_type": 2 00:30:38.988 } 00:30:38.988 ], 00:30:38.988 "driver_specific": {} 00:30:38.988 } 00:30:38.988 ] 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.988 "name": "Existed_Raid", 00:30:38.988 "uuid": "033d5d3c-d7b4-4fa6-b50c-f55587ad388f", 00:30:38.988 "strip_size_kb": 64, 00:30:38.988 "state": "online", 00:30:38.988 "raid_level": "concat", 00:30:38.988 "superblock": true, 00:30:38.988 "num_base_bdevs": 3, 00:30:38.988 "num_base_bdevs_discovered": 3, 00:30:38.988 "num_base_bdevs_operational": 3, 00:30:38.988 "base_bdevs_list": [ 00:30:38.988 { 00:30:38.988 "name": "BaseBdev1", 00:30:38.988 "uuid": "bbd905b9-3adf-4d41-9776-4ec0f1362820", 00:30:38.988 "is_configured": true, 00:30:38.988 "data_offset": 2048, 00:30:38.988 "data_size": 63488 00:30:38.988 }, 00:30:38.988 { 00:30:38.988 "name": "BaseBdev2", 00:30:38.988 "uuid": "ad79eb65-d659-4ceb-ae6b-9a480a28958f", 00:30:38.988 "is_configured": true, 00:30:38.988 "data_offset": 2048, 00:30:38.988 "data_size": 63488 00:30:38.988 }, 00:30:38.988 { 00:30:38.988 "name": "BaseBdev3", 00:30:38.988 "uuid": "f66391f1-cb6e-4b91-93b0-0d81fae15d35", 00:30:38.988 "is_configured": true, 00:30:38.988 "data_offset": 2048, 00:30:38.988 "data_size": 63488 00:30:38.988 } 00:30:38.988 ] 00:30:38.988 }' 00:30:38.988 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.988 17:27:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.247 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.247 [2024-11-26 17:27:16.678486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:39.507 "name": "Existed_Raid", 00:30:39.507 "aliases": [ 00:30:39.507 "033d5d3c-d7b4-4fa6-b50c-f55587ad388f" 00:30:39.507 ], 00:30:39.507 "product_name": "Raid Volume", 00:30:39.507 "block_size": 512, 00:30:39.507 "num_blocks": 190464, 00:30:39.507 "uuid": "033d5d3c-d7b4-4fa6-b50c-f55587ad388f", 00:30:39.507 "assigned_rate_limits": { 00:30:39.507 "rw_ios_per_sec": 0, 00:30:39.507 "rw_mbytes_per_sec": 0, 00:30:39.507 
"r_mbytes_per_sec": 0, 00:30:39.507 "w_mbytes_per_sec": 0 00:30:39.507 }, 00:30:39.507 "claimed": false, 00:30:39.507 "zoned": false, 00:30:39.507 "supported_io_types": { 00:30:39.507 "read": true, 00:30:39.507 "write": true, 00:30:39.507 "unmap": true, 00:30:39.507 "flush": true, 00:30:39.507 "reset": true, 00:30:39.507 "nvme_admin": false, 00:30:39.507 "nvme_io": false, 00:30:39.507 "nvme_io_md": false, 00:30:39.507 "write_zeroes": true, 00:30:39.507 "zcopy": false, 00:30:39.507 "get_zone_info": false, 00:30:39.507 "zone_management": false, 00:30:39.507 "zone_append": false, 00:30:39.507 "compare": false, 00:30:39.507 "compare_and_write": false, 00:30:39.507 "abort": false, 00:30:39.507 "seek_hole": false, 00:30:39.507 "seek_data": false, 00:30:39.507 "copy": false, 00:30:39.507 "nvme_iov_md": false 00:30:39.507 }, 00:30:39.507 "memory_domains": [ 00:30:39.507 { 00:30:39.507 "dma_device_id": "system", 00:30:39.507 "dma_device_type": 1 00:30:39.507 }, 00:30:39.507 { 00:30:39.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:39.507 "dma_device_type": 2 00:30:39.507 }, 00:30:39.507 { 00:30:39.507 "dma_device_id": "system", 00:30:39.507 "dma_device_type": 1 00:30:39.507 }, 00:30:39.507 { 00:30:39.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:39.507 "dma_device_type": 2 00:30:39.507 }, 00:30:39.507 { 00:30:39.507 "dma_device_id": "system", 00:30:39.507 "dma_device_type": 1 00:30:39.507 }, 00:30:39.507 { 00:30:39.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:39.507 "dma_device_type": 2 00:30:39.507 } 00:30:39.507 ], 00:30:39.507 "driver_specific": { 00:30:39.507 "raid": { 00:30:39.507 "uuid": "033d5d3c-d7b4-4fa6-b50c-f55587ad388f", 00:30:39.507 "strip_size_kb": 64, 00:30:39.507 "state": "online", 00:30:39.507 "raid_level": "concat", 00:30:39.507 "superblock": true, 00:30:39.507 "num_base_bdevs": 3, 00:30:39.507 "num_base_bdevs_discovered": 3, 00:30:39.507 "num_base_bdevs_operational": 3, 00:30:39.507 "base_bdevs_list": [ 00:30:39.507 { 00:30:39.507 
"name": "BaseBdev1", 00:30:39.507 "uuid": "bbd905b9-3adf-4d41-9776-4ec0f1362820", 00:30:39.507 "is_configured": true, 00:30:39.507 "data_offset": 2048, 00:30:39.507 "data_size": 63488 00:30:39.507 }, 00:30:39.507 { 00:30:39.507 "name": "BaseBdev2", 00:30:39.507 "uuid": "ad79eb65-d659-4ceb-ae6b-9a480a28958f", 00:30:39.507 "is_configured": true, 00:30:39.507 "data_offset": 2048, 00:30:39.507 "data_size": 63488 00:30:39.507 }, 00:30:39.507 { 00:30:39.507 "name": "BaseBdev3", 00:30:39.507 "uuid": "f66391f1-cb6e-4b91-93b0-0d81fae15d35", 00:30:39.507 "is_configured": true, 00:30:39.507 "data_offset": 2048, 00:30:39.507 "data_size": 63488 00:30:39.507 } 00:30:39.507 ] 00:30:39.507 } 00:30:39.507 } 00:30:39.507 }' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:39.507 BaseBdev2 00:30:39.507 BaseBdev3' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.507 17:27:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.507 17:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.508 [2024-11-26 17:27:16.938310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:39.508 [2024-11-26 17:27:16.938341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:39.508 [2024-11-26 17:27:16.938399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:39.767 "name": "Existed_Raid", 00:30:39.767 "uuid": "033d5d3c-d7b4-4fa6-b50c-f55587ad388f", 00:30:39.767 "strip_size_kb": 64, 00:30:39.767 "state": "offline", 00:30:39.767 "raid_level": "concat", 00:30:39.767 "superblock": true, 00:30:39.767 "num_base_bdevs": 3, 00:30:39.767 "num_base_bdevs_discovered": 2, 00:30:39.767 "num_base_bdevs_operational": 2, 00:30:39.767 "base_bdevs_list": [ 00:30:39.767 { 00:30:39.767 "name": null, 00:30:39.767 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:39.767 "is_configured": false, 00:30:39.767 "data_offset": 0, 00:30:39.767 "data_size": 63488 00:30:39.767 }, 00:30:39.767 { 00:30:39.767 "name": "BaseBdev2", 00:30:39.767 "uuid": "ad79eb65-d659-4ceb-ae6b-9a480a28958f", 00:30:39.767 "is_configured": true, 00:30:39.767 "data_offset": 2048, 00:30:39.767 "data_size": 63488 00:30:39.767 }, 00:30:39.767 { 00:30:39.767 "name": "BaseBdev3", 00:30:39.767 "uuid": "f66391f1-cb6e-4b91-93b0-0d81fae15d35", 00:30:39.767 "is_configured": true, 00:30:39.767 "data_offset": 2048, 00:30:39.767 "data_size": 63488 00:30:39.767 } 00:30:39.767 ] 00:30:39.767 }' 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:39.767 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.336 [2024-11-26 17:27:17.593885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.336 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.337 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.337 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:40.337 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.337 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:40.337 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:40.337 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:40.337 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.337 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.337 [2024-11-26 17:27:17.752471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:40.337 [2024-11-26 17:27:17.752661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.596 BaseBdev2 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.596 
17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.596 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.597 [ 00:30:40.597 { 00:30:40.597 "name": "BaseBdev2", 00:30:40.597 "aliases": [ 00:30:40.597 "94c10fab-d1ac-4154-96e0-3abd3a07d45d" 00:30:40.597 ], 00:30:40.597 "product_name": "Malloc disk", 00:30:40.597 "block_size": 512, 00:30:40.597 "num_blocks": 65536, 00:30:40.597 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:40.597 "assigned_rate_limits": { 00:30:40.597 "rw_ios_per_sec": 0, 00:30:40.597 "rw_mbytes_per_sec": 0, 00:30:40.597 "r_mbytes_per_sec": 0, 00:30:40.597 "w_mbytes_per_sec": 0 
00:30:40.597 }, 00:30:40.597 "claimed": false, 00:30:40.597 "zoned": false, 00:30:40.597 "supported_io_types": { 00:30:40.597 "read": true, 00:30:40.597 "write": true, 00:30:40.597 "unmap": true, 00:30:40.597 "flush": true, 00:30:40.597 "reset": true, 00:30:40.597 "nvme_admin": false, 00:30:40.597 "nvme_io": false, 00:30:40.597 "nvme_io_md": false, 00:30:40.597 "write_zeroes": true, 00:30:40.597 "zcopy": true, 00:30:40.597 "get_zone_info": false, 00:30:40.597 "zone_management": false, 00:30:40.597 "zone_append": false, 00:30:40.597 "compare": false, 00:30:40.597 "compare_and_write": false, 00:30:40.597 "abort": true, 00:30:40.597 "seek_hole": false, 00:30:40.597 "seek_data": false, 00:30:40.597 "copy": true, 00:30:40.597 "nvme_iov_md": false 00:30:40.597 }, 00:30:40.597 "memory_domains": [ 00:30:40.597 { 00:30:40.597 "dma_device_id": "system", 00:30:40.597 "dma_device_type": 1 00:30:40.597 }, 00:30:40.597 { 00:30:40.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:40.597 "dma_device_type": 2 00:30:40.597 } 00:30:40.597 ], 00:30:40.597 "driver_specific": {} 00:30:40.597 } 00:30:40.597 ] 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.597 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.597 BaseBdev3 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.597 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.856 [ 00:30:40.857 { 00:30:40.857 "name": "BaseBdev3", 00:30:40.857 "aliases": [ 00:30:40.857 "a1153f5c-0587-441a-92e4-1685d1965241" 00:30:40.857 ], 00:30:40.857 "product_name": "Malloc disk", 00:30:40.857 "block_size": 512, 00:30:40.857 "num_blocks": 65536, 00:30:40.857 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:40.857 "assigned_rate_limits": { 00:30:40.857 "rw_ios_per_sec": 0, 00:30:40.857 "rw_mbytes_per_sec": 0, 
00:30:40.857 "r_mbytes_per_sec": 0, 00:30:40.857 "w_mbytes_per_sec": 0 00:30:40.857 }, 00:30:40.857 "claimed": false, 00:30:40.857 "zoned": false, 00:30:40.857 "supported_io_types": { 00:30:40.857 "read": true, 00:30:40.857 "write": true, 00:30:40.857 "unmap": true, 00:30:40.857 "flush": true, 00:30:40.857 "reset": true, 00:30:40.857 "nvme_admin": false, 00:30:40.857 "nvme_io": false, 00:30:40.857 "nvme_io_md": false, 00:30:40.857 "write_zeroes": true, 00:30:40.857 "zcopy": true, 00:30:40.857 "get_zone_info": false, 00:30:40.857 "zone_management": false, 00:30:40.857 "zone_append": false, 00:30:40.857 "compare": false, 00:30:40.857 "compare_and_write": false, 00:30:40.857 "abort": true, 00:30:40.857 "seek_hole": false, 00:30:40.857 "seek_data": false, 00:30:40.857 "copy": true, 00:30:40.857 "nvme_iov_md": false 00:30:40.857 }, 00:30:40.857 "memory_domains": [ 00:30:40.857 { 00:30:40.857 "dma_device_id": "system", 00:30:40.857 "dma_device_type": 1 00:30:40.857 }, 00:30:40.857 { 00:30:40.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:40.857 "dma_device_type": 2 00:30:40.857 } 00:30:40.857 ], 00:30:40.857 "driver_specific": {} 00:30:40.857 } 00:30:40.857 ] 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:40.857 [2024-11-26 17:27:18.063989] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:40.857 [2024-11-26 17:27:18.064188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:40.857 [2024-11-26 17:27:18.064290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:40.857 [2024-11-26 17:27:18.066675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.857 "name": "Existed_Raid", 00:30:40.857 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:40.857 "strip_size_kb": 64, 00:30:40.857 "state": "configuring", 00:30:40.857 "raid_level": "concat", 00:30:40.857 "superblock": true, 00:30:40.857 "num_base_bdevs": 3, 00:30:40.857 "num_base_bdevs_discovered": 2, 00:30:40.857 "num_base_bdevs_operational": 3, 00:30:40.857 "base_bdevs_list": [ 00:30:40.857 { 00:30:40.857 "name": "BaseBdev1", 00:30:40.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.857 "is_configured": false, 00:30:40.857 "data_offset": 0, 00:30:40.857 "data_size": 0 00:30:40.857 }, 00:30:40.857 { 00:30:40.857 "name": "BaseBdev2", 00:30:40.857 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:40.857 "is_configured": true, 00:30:40.857 "data_offset": 2048, 00:30:40.857 "data_size": 63488 00:30:40.857 }, 00:30:40.857 { 00:30:40.857 "name": "BaseBdev3", 00:30:40.857 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:40.857 "is_configured": true, 00:30:40.857 "data_offset": 2048, 00:30:40.857 "data_size": 63488 00:30:40.857 } 00:30:40.857 ] 00:30:40.857 }' 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.857 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.116 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:30:41.116 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.116 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.116 [2024-11-26 17:27:18.504113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.117 "name": "Existed_Raid", 00:30:41.117 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:41.117 "strip_size_kb": 64, 00:30:41.117 "state": "configuring", 00:30:41.117 "raid_level": "concat", 00:30:41.117 "superblock": true, 00:30:41.117 "num_base_bdevs": 3, 00:30:41.117 "num_base_bdevs_discovered": 1, 00:30:41.117 "num_base_bdevs_operational": 3, 00:30:41.117 "base_bdevs_list": [ 00:30:41.117 { 00:30:41.117 "name": "BaseBdev1", 00:30:41.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.117 "is_configured": false, 00:30:41.117 "data_offset": 0, 00:30:41.117 "data_size": 0 00:30:41.117 }, 00:30:41.117 { 00:30:41.117 "name": null, 00:30:41.117 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:41.117 "is_configured": false, 00:30:41.117 "data_offset": 0, 00:30:41.117 "data_size": 63488 00:30:41.117 }, 00:30:41.117 { 00:30:41.117 "name": "BaseBdev3", 00:30:41.117 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:41.117 "is_configured": true, 00:30:41.117 "data_offset": 2048, 00:30:41.117 "data_size": 63488 00:30:41.117 } 00:30:41.117 ] 00:30:41.117 }' 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.117 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.685 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.685 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:41.686 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:41.686 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.686 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.686 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:41.686 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:41.686 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.686 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.686 [2024-11-26 17:27:19.002907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:41.686 BaseBdev1 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.686 [ 00:30:41.686 { 00:30:41.686 "name": "BaseBdev1", 00:30:41.686 "aliases": [ 00:30:41.686 "4677d886-e750-457c-b30f-9cfed3580438" 00:30:41.686 ], 00:30:41.686 "product_name": "Malloc disk", 00:30:41.686 "block_size": 512, 00:30:41.686 "num_blocks": 65536, 00:30:41.686 "uuid": "4677d886-e750-457c-b30f-9cfed3580438", 00:30:41.686 "assigned_rate_limits": { 00:30:41.686 "rw_ios_per_sec": 0, 00:30:41.686 "rw_mbytes_per_sec": 0, 00:30:41.686 "r_mbytes_per_sec": 0, 00:30:41.686 "w_mbytes_per_sec": 0 00:30:41.686 }, 00:30:41.686 "claimed": true, 00:30:41.686 "claim_type": "exclusive_write", 00:30:41.686 "zoned": false, 00:30:41.686 "supported_io_types": { 00:30:41.686 "read": true, 00:30:41.686 "write": true, 00:30:41.686 "unmap": true, 00:30:41.686 "flush": true, 00:30:41.686 "reset": true, 00:30:41.686 "nvme_admin": false, 00:30:41.686 "nvme_io": false, 00:30:41.686 "nvme_io_md": false, 00:30:41.686 "write_zeroes": true, 00:30:41.686 "zcopy": true, 00:30:41.686 "get_zone_info": false, 00:30:41.686 "zone_management": false, 00:30:41.686 "zone_append": false, 00:30:41.686 "compare": false, 00:30:41.686 "compare_and_write": false, 00:30:41.686 "abort": true, 00:30:41.686 "seek_hole": false, 00:30:41.686 "seek_data": false, 00:30:41.686 "copy": true, 00:30:41.686 "nvme_iov_md": false 00:30:41.686 }, 00:30:41.686 "memory_domains": [ 00:30:41.686 { 00:30:41.686 "dma_device_id": "system", 00:30:41.686 "dma_device_type": 1 00:30:41.686 }, 00:30:41.686 { 00:30:41.686 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:30:41.686 "dma_device_type": 2 00:30:41.686 } 00:30:41.686 ], 00:30:41.686 "driver_specific": {} 00:30:41.686 } 00:30:41.686 ] 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.686 "name": "Existed_Raid", 00:30:41.686 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:41.686 "strip_size_kb": 64, 00:30:41.686 "state": "configuring", 00:30:41.686 "raid_level": "concat", 00:30:41.686 "superblock": true, 00:30:41.686 "num_base_bdevs": 3, 00:30:41.686 "num_base_bdevs_discovered": 2, 00:30:41.686 "num_base_bdevs_operational": 3, 00:30:41.686 "base_bdevs_list": [ 00:30:41.686 { 00:30:41.686 "name": "BaseBdev1", 00:30:41.686 "uuid": "4677d886-e750-457c-b30f-9cfed3580438", 00:30:41.686 "is_configured": true, 00:30:41.686 "data_offset": 2048, 00:30:41.686 "data_size": 63488 00:30:41.686 }, 00:30:41.686 { 00:30:41.686 "name": null, 00:30:41.686 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:41.686 "is_configured": false, 00:30:41.686 "data_offset": 0, 00:30:41.686 "data_size": 63488 00:30:41.686 }, 00:30:41.686 { 00:30:41.686 "name": "BaseBdev3", 00:30:41.686 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:41.686 "is_configured": true, 00:30:41.686 "data_offset": 2048, 00:30:41.686 "data_size": 63488 00:30:41.686 } 00:30:41.686 ] 00:30:41.686 }' 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.686 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- 
# jq '.[0].base_bdevs_list[0].is_configured' 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.252 [2024-11-26 17:27:19.539109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.252 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.253 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.253 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.253 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:42.253 "name": "Existed_Raid", 00:30:42.253 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:42.253 "strip_size_kb": 64, 00:30:42.253 "state": "configuring", 00:30:42.253 "raid_level": "concat", 00:30:42.253 "superblock": true, 00:30:42.253 "num_base_bdevs": 3, 00:30:42.253 "num_base_bdevs_discovered": 1, 00:30:42.253 "num_base_bdevs_operational": 3, 00:30:42.253 "base_bdevs_list": [ 00:30:42.253 { 00:30:42.253 "name": "BaseBdev1", 00:30:42.253 "uuid": "4677d886-e750-457c-b30f-9cfed3580438", 00:30:42.253 "is_configured": true, 00:30:42.253 "data_offset": 2048, 00:30:42.253 "data_size": 63488 00:30:42.253 }, 00:30:42.253 { 00:30:42.253 "name": null, 00:30:42.253 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:42.253 "is_configured": false, 00:30:42.253 "data_offset": 0, 00:30:42.253 "data_size": 63488 00:30:42.253 }, 00:30:42.253 { 00:30:42.253 "name": null, 00:30:42.253 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:42.253 "is_configured": false, 00:30:42.253 "data_offset": 0, 00:30:42.253 "data_size": 63488 00:30:42.253 } 00:30:42.253 ] 00:30:42.253 }' 00:30:42.253 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:42.253 17:27:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:42.821 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:42.821 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.821 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.821 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 [2024-11-26 17:27:20.031286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:42.821 17:27:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:42.821 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:42.822 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.822 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.822 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.822 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.822 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.822 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:42.822 "name": "Existed_Raid", 00:30:42.822 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:42.822 "strip_size_kb": 64, 00:30:42.822 "state": "configuring", 00:30:42.822 "raid_level": "concat", 00:30:42.822 "superblock": true, 00:30:42.822 "num_base_bdevs": 3, 00:30:42.822 "num_base_bdevs_discovered": 2, 00:30:42.822 "num_base_bdevs_operational": 3, 00:30:42.822 "base_bdevs_list": [ 00:30:42.822 { 00:30:42.822 "name": "BaseBdev1", 00:30:42.822 "uuid": "4677d886-e750-457c-b30f-9cfed3580438", 00:30:42.822 "is_configured": true, 00:30:42.822 "data_offset": 2048, 00:30:42.822 "data_size": 63488 00:30:42.822 }, 00:30:42.822 { 00:30:42.822 "name": null, 00:30:42.822 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:42.822 "is_configured": 
false, 00:30:42.822 "data_offset": 0, 00:30:42.822 "data_size": 63488 00:30:42.822 }, 00:30:42.822 { 00:30:42.822 "name": "BaseBdev3", 00:30:42.822 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:42.822 "is_configured": true, 00:30:42.822 "data_offset": 2048, 00:30:42.822 "data_size": 63488 00:30:42.822 } 00:30:42.822 ] 00:30:42.822 }' 00:30:42.822 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:42.822 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.080 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.080 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.080 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.080 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:43.080 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.339 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.340 [2024-11-26 17:27:20.543432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:43.340 17:27:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:43.340 "name": "Existed_Raid", 00:30:43.340 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:43.340 "strip_size_kb": 64, 00:30:43.340 "state": "configuring", 00:30:43.340 "raid_level": "concat", 00:30:43.340 "superblock": true, 00:30:43.340 "num_base_bdevs": 3, 00:30:43.340 
"num_base_bdevs_discovered": 1, 00:30:43.340 "num_base_bdevs_operational": 3, 00:30:43.340 "base_bdevs_list": [ 00:30:43.340 { 00:30:43.340 "name": null, 00:30:43.340 "uuid": "4677d886-e750-457c-b30f-9cfed3580438", 00:30:43.340 "is_configured": false, 00:30:43.340 "data_offset": 0, 00:30:43.340 "data_size": 63488 00:30:43.340 }, 00:30:43.340 { 00:30:43.340 "name": null, 00:30:43.340 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:43.340 "is_configured": false, 00:30:43.340 "data_offset": 0, 00:30:43.340 "data_size": 63488 00:30:43.340 }, 00:30:43.340 { 00:30:43.340 "name": "BaseBdev3", 00:30:43.340 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:43.340 "is_configured": true, 00:30:43.340 "data_offset": 2048, 00:30:43.340 "data_size": 63488 00:30:43.340 } 00:30:43.340 ] 00:30:43.340 }' 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:43.340 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.913 17:27:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.913 [2024-11-26 17:27:21.146605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.913 
17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:43.913 "name": "Existed_Raid", 00:30:43.913 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:43.913 "strip_size_kb": 64, 00:30:43.913 "state": "configuring", 00:30:43.913 "raid_level": "concat", 00:30:43.913 "superblock": true, 00:30:43.913 "num_base_bdevs": 3, 00:30:43.913 "num_base_bdevs_discovered": 2, 00:30:43.913 "num_base_bdevs_operational": 3, 00:30:43.913 "base_bdevs_list": [ 00:30:43.913 { 00:30:43.913 "name": null, 00:30:43.913 "uuid": "4677d886-e750-457c-b30f-9cfed3580438", 00:30:43.913 "is_configured": false, 00:30:43.913 "data_offset": 0, 00:30:43.913 "data_size": 63488 00:30:43.913 }, 00:30:43.913 { 00:30:43.913 "name": "BaseBdev2", 00:30:43.913 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:43.913 "is_configured": true, 00:30:43.913 "data_offset": 2048, 00:30:43.913 "data_size": 63488 00:30:43.913 }, 00:30:43.913 { 00:30:43.913 "name": "BaseBdev3", 00:30:43.913 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:43.913 "is_configured": true, 00:30:43.913 "data_offset": 2048, 00:30:43.913 "data_size": 63488 00:30:43.913 } 00:30:43.913 ] 00:30:43.913 }' 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:43.913 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4677d886-e750-457c-b30f-9cfed3580438 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.480 [2024-11-26 17:27:21.751394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:44.480 [2024-11-26 17:27:21.751676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:44.480 [2024-11-26 17:27:21.751695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:44.480 [2024-11-26 17:27:21.751972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:44.480 NewBaseBdev 00:30:44.480 [2024-11-26 17:27:21.752152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:44.480 [2024-11-26 17:27:21.752165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:30:44.480 [2024-11-26 17:27:21.752295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.480 [ 00:30:44.480 { 00:30:44.480 "name": "NewBaseBdev", 00:30:44.480 "aliases": [ 00:30:44.480 "4677d886-e750-457c-b30f-9cfed3580438" 00:30:44.480 ], 00:30:44.480 "product_name": "Malloc disk", 00:30:44.480 "block_size": 512, 
00:30:44.480 "num_blocks": 65536, 00:30:44.480 "uuid": "4677d886-e750-457c-b30f-9cfed3580438", 00:30:44.480 "assigned_rate_limits": { 00:30:44.480 "rw_ios_per_sec": 0, 00:30:44.480 "rw_mbytes_per_sec": 0, 00:30:44.480 "r_mbytes_per_sec": 0, 00:30:44.480 "w_mbytes_per_sec": 0 00:30:44.480 }, 00:30:44.480 "claimed": true, 00:30:44.480 "claim_type": "exclusive_write", 00:30:44.480 "zoned": false, 00:30:44.480 "supported_io_types": { 00:30:44.480 "read": true, 00:30:44.480 "write": true, 00:30:44.480 "unmap": true, 00:30:44.480 "flush": true, 00:30:44.480 "reset": true, 00:30:44.480 "nvme_admin": false, 00:30:44.480 "nvme_io": false, 00:30:44.480 "nvme_io_md": false, 00:30:44.480 "write_zeroes": true, 00:30:44.480 "zcopy": true, 00:30:44.480 "get_zone_info": false, 00:30:44.480 "zone_management": false, 00:30:44.480 "zone_append": false, 00:30:44.480 "compare": false, 00:30:44.480 "compare_and_write": false, 00:30:44.480 "abort": true, 00:30:44.480 "seek_hole": false, 00:30:44.480 "seek_data": false, 00:30:44.480 "copy": true, 00:30:44.480 "nvme_iov_md": false 00:30:44.480 }, 00:30:44.480 "memory_domains": [ 00:30:44.480 { 00:30:44.480 "dma_device_id": "system", 00:30:44.480 "dma_device_type": 1 00:30:44.480 }, 00:30:44.480 { 00:30:44.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:44.480 "dma_device_type": 2 00:30:44.480 } 00:30:44.480 ], 00:30:44.480 "driver_specific": {} 00:30:44.480 } 00:30:44.480 ] 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.480 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:44.480 "name": "Existed_Raid", 00:30:44.480 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:44.480 "strip_size_kb": 64, 00:30:44.480 "state": "online", 00:30:44.480 "raid_level": "concat", 00:30:44.480 "superblock": true, 00:30:44.480 "num_base_bdevs": 3, 00:30:44.480 "num_base_bdevs_discovered": 3, 00:30:44.480 "num_base_bdevs_operational": 3, 00:30:44.480 "base_bdevs_list": [ 00:30:44.480 { 00:30:44.480 "name": "NewBaseBdev", 00:30:44.480 "uuid": 
"4677d886-e750-457c-b30f-9cfed3580438", 00:30:44.480 "is_configured": true, 00:30:44.480 "data_offset": 2048, 00:30:44.480 "data_size": 63488 00:30:44.480 }, 00:30:44.480 { 00:30:44.481 "name": "BaseBdev2", 00:30:44.481 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:44.481 "is_configured": true, 00:30:44.481 "data_offset": 2048, 00:30:44.481 "data_size": 63488 00:30:44.481 }, 00:30:44.481 { 00:30:44.481 "name": "BaseBdev3", 00:30:44.481 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:44.481 "is_configured": true, 00:30:44.481 "data_offset": 2048, 00:30:44.481 "data_size": 63488 00:30:44.481 } 00:30:44.481 ] 00:30:44.481 }' 00:30:44.481 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:44.481 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:30:45.046 [2024-11-26 17:27:22.243880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.046 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.046 "name": "Existed_Raid", 00:30:45.046 "aliases": [ 00:30:45.046 "ec71fd0e-5372-4ed8-9f93-4c3b06985504" 00:30:45.046 ], 00:30:45.046 "product_name": "Raid Volume", 00:30:45.046 "block_size": 512, 00:30:45.046 "num_blocks": 190464, 00:30:45.046 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:45.046 "assigned_rate_limits": { 00:30:45.046 "rw_ios_per_sec": 0, 00:30:45.046 "rw_mbytes_per_sec": 0, 00:30:45.046 "r_mbytes_per_sec": 0, 00:30:45.046 "w_mbytes_per_sec": 0 00:30:45.046 }, 00:30:45.046 "claimed": false, 00:30:45.046 "zoned": false, 00:30:45.046 "supported_io_types": { 00:30:45.046 "read": true, 00:30:45.046 "write": true, 00:30:45.046 "unmap": true, 00:30:45.046 "flush": true, 00:30:45.046 "reset": true, 00:30:45.046 "nvme_admin": false, 00:30:45.046 "nvme_io": false, 00:30:45.046 "nvme_io_md": false, 00:30:45.046 "write_zeroes": true, 00:30:45.046 "zcopy": false, 00:30:45.046 "get_zone_info": false, 00:30:45.046 "zone_management": false, 00:30:45.046 "zone_append": false, 00:30:45.046 "compare": false, 00:30:45.046 "compare_and_write": false, 00:30:45.046 "abort": false, 00:30:45.046 "seek_hole": false, 00:30:45.046 "seek_data": false, 00:30:45.046 "copy": false, 00:30:45.046 "nvme_iov_md": false 00:30:45.046 }, 00:30:45.046 "memory_domains": [ 00:30:45.046 { 00:30:45.046 "dma_device_id": "system", 00:30:45.046 "dma_device_type": 1 00:30:45.046 }, 00:30:45.046 { 00:30:45.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.046 "dma_device_type": 2 00:30:45.046 }, 00:30:45.046 { 00:30:45.046 "dma_device_id": "system", 00:30:45.046 "dma_device_type": 1 00:30:45.046 }, 00:30:45.046 { 00:30:45.047 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.047 "dma_device_type": 2 00:30:45.047 }, 00:30:45.047 { 00:30:45.047 "dma_device_id": "system", 00:30:45.047 "dma_device_type": 1 00:30:45.047 }, 00:30:45.047 { 00:30:45.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.047 "dma_device_type": 2 00:30:45.047 } 00:30:45.047 ], 00:30:45.047 "driver_specific": { 00:30:45.047 "raid": { 00:30:45.047 "uuid": "ec71fd0e-5372-4ed8-9f93-4c3b06985504", 00:30:45.047 "strip_size_kb": 64, 00:30:45.047 "state": "online", 00:30:45.047 "raid_level": "concat", 00:30:45.047 "superblock": true, 00:30:45.047 "num_base_bdevs": 3, 00:30:45.047 "num_base_bdevs_discovered": 3, 00:30:45.047 "num_base_bdevs_operational": 3, 00:30:45.047 "base_bdevs_list": [ 00:30:45.047 { 00:30:45.047 "name": "NewBaseBdev", 00:30:45.047 "uuid": "4677d886-e750-457c-b30f-9cfed3580438", 00:30:45.047 "is_configured": true, 00:30:45.047 "data_offset": 2048, 00:30:45.047 "data_size": 63488 00:30:45.047 }, 00:30:45.047 { 00:30:45.047 "name": "BaseBdev2", 00:30:45.047 "uuid": "94c10fab-d1ac-4154-96e0-3abd3a07d45d", 00:30:45.047 "is_configured": true, 00:30:45.047 "data_offset": 2048, 00:30:45.047 "data_size": 63488 00:30:45.047 }, 00:30:45.047 { 00:30:45.047 "name": "BaseBdev3", 00:30:45.047 "uuid": "a1153f5c-0587-441a-92e4-1685d1965241", 00:30:45.047 "is_configured": true, 00:30:45.047 "data_offset": 2048, 00:30:45.047 "data_size": 63488 00:30:45.047 } 00:30:45.047 ] 00:30:45.047 } 00:30:45.047 } 00:30:45.047 }' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:45.047 BaseBdev2 00:30:45.047 BaseBdev3' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.047 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.305 [2024-11-26 17:27:22.491651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:45.305 [2024-11-26 17:27:22.491684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:45.305 [2024-11-26 17:27:22.491774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:45.305 [2024-11-26 17:27:22.491840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:45.305 [2024-11-26 17:27:22.491857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66640 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66640 ']' 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66640 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66640 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:45.305 killing process with pid 66640 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66640' 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66640 00:30:45.305 [2024-11-26 17:27:22.535527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:45.305 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66640 00:30:45.562 [2024-11-26 17:27:22.858025] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:46.936 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:30:46.936 00:30:46.936 real 0m11.013s 00:30:46.936 user 0m17.602s 00:30:46.936 sys 0m1.944s 00:30:46.936 17:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:30:46.936 ************************************ 00:30:46.936 END TEST raid_state_function_test_sb 00:30:46.936 ************************************ 00:30:46.936 17:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.936 17:27:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:30:46.936 17:27:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:46.936 17:27:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.936 17:27:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:46.936 ************************************ 00:30:46.936 START TEST raid_superblock_test 00:30:46.936 ************************************ 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:30:46.936 17:27:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67270 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67270 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67270 ']' 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.936 17:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:46.936 [2024-11-26 17:27:24.228448] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:30:46.937 [2024-11-26 17:27:24.228840] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67270 ] 00:30:47.195 [2024-11-26 17:27:24.422179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.195 [2024-11-26 17:27:24.543530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.454 [2024-11-26 17:27:24.752257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:47.454 [2024-11-26 17:27:24.752317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:47.713 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.713 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:30:47.713 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:30:47.713 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:47.713 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:30:47.713 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:30:47.713 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:47.713 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:47.714 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:47.714 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:47.714 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:30:47.714 
17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.714 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.974 malloc1 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.974 [2024-11-26 17:27:25.193313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:47.974 [2024-11-26 17:27:25.193842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.974 [2024-11-26 17:27:25.193909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:47.974 [2024-11-26 17:27:25.194012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.974 [2024-11-26 17:27:25.196528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.974 [2024-11-26 17:27:25.196698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:47.974 pt1 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.974 malloc2 00:30:47.974 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.975 [2024-11-26 17:27:25.251143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:47.975 [2024-11-26 17:27:25.251201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.975 [2024-11-26 17:27:25.251232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:47.975 [2024-11-26 17:27:25.251243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.975 [2024-11-26 17:27:25.253952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.975 [2024-11-26 17:27:25.254126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:47.975 
pt2 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.975 malloc3 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.975 [2024-11-26 17:27:25.320233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:47.975 [2024-11-26 17:27:25.320288] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.975 [2024-11-26 17:27:25.320313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:47.975 [2024-11-26 17:27:25.320325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.975 [2024-11-26 17:27:25.322686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.975 [2024-11-26 17:27:25.322725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:47.975 pt3 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.975 [2024-11-26 17:27:25.332291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:47.975 [2024-11-26 17:27:25.334365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:47.975 [2024-11-26 17:27:25.334433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:47.975 [2024-11-26 17:27:25.334579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:47.975 [2024-11-26 17:27:25.334594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:47.975 [2024-11-26 17:27:25.334846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:30:47.975 [2024-11-26 17:27:25.335003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:47.975 [2024-11-26 17:27:25.335013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:47.975 [2024-11-26 17:27:25.335179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:47.975 "name": "raid_bdev1", 00:30:47.975 "uuid": "9cc92139-b88b-4412-b95e-4f9d7edbcc09", 00:30:47.975 "strip_size_kb": 64, 00:30:47.975 "state": "online", 00:30:47.975 "raid_level": "concat", 00:30:47.975 "superblock": true, 00:30:47.975 "num_base_bdevs": 3, 00:30:47.975 "num_base_bdevs_discovered": 3, 00:30:47.975 "num_base_bdevs_operational": 3, 00:30:47.975 "base_bdevs_list": [ 00:30:47.975 { 00:30:47.975 "name": "pt1", 00:30:47.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:47.975 "is_configured": true, 00:30:47.975 "data_offset": 2048, 00:30:47.975 "data_size": 63488 00:30:47.975 }, 00:30:47.975 { 00:30:47.975 "name": "pt2", 00:30:47.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:47.975 "is_configured": true, 00:30:47.975 "data_offset": 2048, 00:30:47.975 "data_size": 63488 00:30:47.975 }, 00:30:47.975 { 00:30:47.975 "name": "pt3", 00:30:47.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:47.975 "is_configured": true, 00:30:47.975 "data_offset": 2048, 00:30:47.975 "data_size": 63488 00:30:47.975 } 00:30:47.975 ] 00:30:47.975 }' 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:47.975 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.544 [2024-11-26 17:27:25.801040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.544 "name": "raid_bdev1", 00:30:48.544 "aliases": [ 00:30:48.544 "9cc92139-b88b-4412-b95e-4f9d7edbcc09" 00:30:48.544 ], 00:30:48.544 "product_name": "Raid Volume", 00:30:48.544 "block_size": 512, 00:30:48.544 "num_blocks": 190464, 00:30:48.544 "uuid": "9cc92139-b88b-4412-b95e-4f9d7edbcc09", 00:30:48.544 "assigned_rate_limits": { 00:30:48.544 "rw_ios_per_sec": 0, 00:30:48.544 "rw_mbytes_per_sec": 0, 00:30:48.544 "r_mbytes_per_sec": 0, 00:30:48.544 "w_mbytes_per_sec": 0 00:30:48.544 }, 00:30:48.544 "claimed": false, 00:30:48.544 "zoned": false, 00:30:48.544 "supported_io_types": { 00:30:48.544 "read": true, 00:30:48.544 "write": true, 00:30:48.544 "unmap": true, 00:30:48.544 "flush": true, 00:30:48.544 "reset": true, 00:30:48.544 "nvme_admin": false, 00:30:48.544 "nvme_io": false, 00:30:48.544 "nvme_io_md": false, 00:30:48.544 "write_zeroes": true, 00:30:48.544 "zcopy": false, 00:30:48.544 "get_zone_info": false, 00:30:48.544 "zone_management": false, 00:30:48.544 "zone_append": false, 00:30:48.544 "compare": 
false, 00:30:48.544 "compare_and_write": false, 00:30:48.544 "abort": false, 00:30:48.544 "seek_hole": false, 00:30:48.544 "seek_data": false, 00:30:48.544 "copy": false, 00:30:48.544 "nvme_iov_md": false 00:30:48.544 }, 00:30:48.544 "memory_domains": [ 00:30:48.544 { 00:30:48.544 "dma_device_id": "system", 00:30:48.544 "dma_device_type": 1 00:30:48.544 }, 00:30:48.544 { 00:30:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.544 "dma_device_type": 2 00:30:48.544 }, 00:30:48.544 { 00:30:48.544 "dma_device_id": "system", 00:30:48.544 "dma_device_type": 1 00:30:48.544 }, 00:30:48.544 { 00:30:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.544 "dma_device_type": 2 00:30:48.544 }, 00:30:48.544 { 00:30:48.544 "dma_device_id": "system", 00:30:48.544 "dma_device_type": 1 00:30:48.544 }, 00:30:48.544 { 00:30:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.544 "dma_device_type": 2 00:30:48.544 } 00:30:48.544 ], 00:30:48.544 "driver_specific": { 00:30:48.544 "raid": { 00:30:48.544 "uuid": "9cc92139-b88b-4412-b95e-4f9d7edbcc09", 00:30:48.544 "strip_size_kb": 64, 00:30:48.544 "state": "online", 00:30:48.544 "raid_level": "concat", 00:30:48.544 "superblock": true, 00:30:48.544 "num_base_bdevs": 3, 00:30:48.544 "num_base_bdevs_discovered": 3, 00:30:48.544 "num_base_bdevs_operational": 3, 00:30:48.544 "base_bdevs_list": [ 00:30:48.544 { 00:30:48.544 "name": "pt1", 00:30:48.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:48.544 "is_configured": true, 00:30:48.544 "data_offset": 2048, 00:30:48.544 "data_size": 63488 00:30:48.544 }, 00:30:48.544 { 00:30:48.544 "name": "pt2", 00:30:48.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:48.544 "is_configured": true, 00:30:48.544 "data_offset": 2048, 00:30:48.544 "data_size": 63488 00:30:48.544 }, 00:30:48.544 { 00:30:48.544 "name": "pt3", 00:30:48.544 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:48.544 "is_configured": true, 00:30:48.544 "data_offset": 2048, 00:30:48.544 
"data_size": 63488 00:30:48.544 } 00:30:48.544 ] 00:30:48.544 } 00:30:48.544 } 00:30:48.544 }' 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:48.544 pt2 00:30:48.544 pt3' 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.544 17:27:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.544 17:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.803 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.803 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.803 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:48.803 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.803 17:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:30:48.804 [2024-11-26 17:27:26.049029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:48.804 17:27:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9cc92139-b88b-4412-b95e-4f9d7edbcc09 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9cc92139-b88b-4412-b95e-4f9d7edbcc09 ']' 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.804 [2024-11-26 17:27:26.096810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:48.804 [2024-11-26 17:27:26.096954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:48.804 [2024-11-26 17:27:26.097133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:48.804 [2024-11-26 17:27:26.097240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:48.804 [2024-11-26 17:27:26.097373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.804 17:27:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.804 [2024-11-26 17:27:26.228849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:48.804 [2024-11-26 17:27:26.231089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:30:48.804 [2024-11-26 17:27:26.231255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:30:48.804 [2024-11-26 17:27:26.231343] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:48.804 [2024-11-26 17:27:26.231538] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:48.804 [2024-11-26 17:27:26.231616] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:30:48.804 [2024-11-26 17:27:26.231792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:48.804 [2024-11-26 17:27:26.231871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:30:48.804 request: 00:30:48.804 { 00:30:48.804 "name": "raid_bdev1", 00:30:48.804 "raid_level": "concat", 00:30:48.804 "base_bdevs": [ 00:30:48.804 "malloc1", 00:30:48.804 "malloc2", 00:30:48.804 "malloc3" 00:30:48.804 ], 00:30:48.804 "strip_size_kb": 64, 00:30:48.804 "superblock": false, 00:30:48.804 "method": "bdev_raid_create", 00:30:48.804 "req_id": 1 00:30:48.804 } 00:30:48.804 Got JSON-RPC error response 00:30:48.804 response: 00:30:48.804 { 00:30:48.804 "code": -17, 00:30:48.804 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:48.804 } 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.804 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.063 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.063 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:30:49.063 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:30:49.063 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:49.063 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.063 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.063 [2024-11-26 17:27:26.292831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:49.064 [2024-11-26 17:27:26.292895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.064 [2024-11-26 17:27:26.292921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:49.064 [2024-11-26 17:27:26.292935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.064 [2024-11-26 17:27:26.295753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.064 [2024-11-26 17:27:26.295796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:49.064 [2024-11-26 17:27:26.295884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:49.064 [2024-11-26 17:27:26.295938] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:49.064 pt1 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:49.064 "name": "raid_bdev1", 
00:30:49.064 "uuid": "9cc92139-b88b-4412-b95e-4f9d7edbcc09", 00:30:49.064 "strip_size_kb": 64, 00:30:49.064 "state": "configuring", 00:30:49.064 "raid_level": "concat", 00:30:49.064 "superblock": true, 00:30:49.064 "num_base_bdevs": 3, 00:30:49.064 "num_base_bdevs_discovered": 1, 00:30:49.064 "num_base_bdevs_operational": 3, 00:30:49.064 "base_bdevs_list": [ 00:30:49.064 { 00:30:49.064 "name": "pt1", 00:30:49.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:49.064 "is_configured": true, 00:30:49.064 "data_offset": 2048, 00:30:49.064 "data_size": 63488 00:30:49.064 }, 00:30:49.064 { 00:30:49.064 "name": null, 00:30:49.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:49.064 "is_configured": false, 00:30:49.064 "data_offset": 2048, 00:30:49.064 "data_size": 63488 00:30:49.064 }, 00:30:49.064 { 00:30:49.064 "name": null, 00:30:49.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:49.064 "is_configured": false, 00:30:49.064 "data_offset": 2048, 00:30:49.064 "data_size": 63488 00:30:49.064 } 00:30:49.064 ] 00:30:49.064 }' 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:49.064 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.323 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:30:49.323 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:49.323 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.323 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.323 [2024-11-26 17:27:26.760940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:49.323 [2024-11-26 17:27:26.761013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.323 [2024-11-26 17:27:26.761043] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:30:49.323 [2024-11-26 17:27:26.761066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.323 [2024-11-26 17:27:26.761510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.323 [2024-11-26 17:27:26.761530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:49.323 [2024-11-26 17:27:26.761624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:49.323 [2024-11-26 17:27:26.761652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:49.323 pt2 00:30:49.323 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.323 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:30:49.323 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.323 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.581 [2024-11-26 17:27:26.768936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:30:49.581 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:49.582 "name": "raid_bdev1", 00:30:49.582 "uuid": "9cc92139-b88b-4412-b95e-4f9d7edbcc09", 00:30:49.582 "strip_size_kb": 64, 00:30:49.582 "state": "configuring", 00:30:49.582 "raid_level": "concat", 00:30:49.582 "superblock": true, 00:30:49.582 "num_base_bdevs": 3, 00:30:49.582 "num_base_bdevs_discovered": 1, 00:30:49.582 "num_base_bdevs_operational": 3, 00:30:49.582 "base_bdevs_list": [ 00:30:49.582 { 00:30:49.582 "name": "pt1", 00:30:49.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:49.582 "is_configured": true, 00:30:49.582 "data_offset": 2048, 00:30:49.582 "data_size": 63488 00:30:49.582 }, 00:30:49.582 { 00:30:49.582 "name": null, 00:30:49.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:49.582 "is_configured": false, 00:30:49.582 "data_offset": 0, 00:30:49.582 "data_size": 63488 00:30:49.582 }, 00:30:49.582 { 00:30:49.582 "name": null, 00:30:49.582 
"uuid": "00000000-0000-0000-0000-000000000003", 00:30:49.582 "is_configured": false, 00:30:49.582 "data_offset": 2048, 00:30:49.582 "data_size": 63488 00:30:49.582 } 00:30:49.582 ] 00:30:49.582 }' 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:49.582 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.841 [2024-11-26 17:27:27.209020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:49.841 [2024-11-26 17:27:27.209104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.841 [2024-11-26 17:27:27.209125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:30:49.841 [2024-11-26 17:27:27.209140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.841 [2024-11-26 17:27:27.209610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.841 [2024-11-26 17:27:27.209633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:49.841 [2024-11-26 17:27:27.209716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:49.841 [2024-11-26 17:27:27.209743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:49.841 pt2 00:30:49.841 17:27:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.841 [2024-11-26 17:27:27.221035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:49.841 [2024-11-26 17:27:27.221113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.841 [2024-11-26 17:27:27.221132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:49.841 [2024-11-26 17:27:27.221146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.841 [2024-11-26 17:27:27.221594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.841 [2024-11-26 17:27:27.221620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:49.841 [2024-11-26 17:27:27.221699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:30:49.841 [2024-11-26 17:27:27.221724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:49.841 [2024-11-26 17:27:27.221849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:49.841 [2024-11-26 17:27:27.221863] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:49.841 [2024-11-26 17:27:27.222157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:30:49.841 [2024-11-26 17:27:27.222312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:49.841 [2024-11-26 17:27:27.222322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:30:49.841 [2024-11-26 17:27:27.222501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:49.841 pt3 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.841 17:27:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:49.841 "name": "raid_bdev1", 00:30:49.841 "uuid": "9cc92139-b88b-4412-b95e-4f9d7edbcc09", 00:30:49.841 "strip_size_kb": 64, 00:30:49.841 "state": "online", 00:30:49.841 "raid_level": "concat", 00:30:49.841 "superblock": true, 00:30:49.841 "num_base_bdevs": 3, 00:30:49.841 "num_base_bdevs_discovered": 3, 00:30:49.841 "num_base_bdevs_operational": 3, 00:30:49.841 "base_bdevs_list": [ 00:30:49.841 { 00:30:49.841 "name": "pt1", 00:30:49.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:49.841 "is_configured": true, 00:30:49.841 "data_offset": 2048, 00:30:49.841 "data_size": 63488 00:30:49.841 }, 00:30:49.841 { 00:30:49.841 "name": "pt2", 00:30:49.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:49.841 "is_configured": true, 00:30:49.841 "data_offset": 2048, 00:30:49.841 "data_size": 63488 00:30:49.841 }, 00:30:49.841 { 00:30:49.841 "name": "pt3", 00:30:49.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:49.841 "is_configured": true, 00:30:49.841 "data_offset": 2048, 00:30:49.841 "data_size": 63488 00:30:49.841 } 00:30:49.841 ] 00:30:49.841 }' 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:49.841 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:50.410 [2024-11-26 17:27:27.705465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:50.410 "name": "raid_bdev1", 00:30:50.410 "aliases": [ 00:30:50.410 "9cc92139-b88b-4412-b95e-4f9d7edbcc09" 00:30:50.410 ], 00:30:50.410 "product_name": "Raid Volume", 00:30:50.410 "block_size": 512, 00:30:50.410 "num_blocks": 190464, 00:30:50.410 "uuid": "9cc92139-b88b-4412-b95e-4f9d7edbcc09", 00:30:50.410 "assigned_rate_limits": { 00:30:50.410 "rw_ios_per_sec": 0, 00:30:50.410 "rw_mbytes_per_sec": 0, 00:30:50.410 "r_mbytes_per_sec": 0, 00:30:50.410 "w_mbytes_per_sec": 0 00:30:50.410 }, 00:30:50.410 "claimed": false, 00:30:50.410 "zoned": false, 00:30:50.410 "supported_io_types": { 00:30:50.410 "read": true, 00:30:50.410 "write": true, 00:30:50.410 "unmap": true, 00:30:50.410 "flush": true, 00:30:50.410 "reset": true, 00:30:50.410 "nvme_admin": false, 00:30:50.410 "nvme_io": false, 00:30:50.410 
"nvme_io_md": false, 00:30:50.410 "write_zeroes": true, 00:30:50.410 "zcopy": false, 00:30:50.410 "get_zone_info": false, 00:30:50.410 "zone_management": false, 00:30:50.410 "zone_append": false, 00:30:50.410 "compare": false, 00:30:50.410 "compare_and_write": false, 00:30:50.410 "abort": false, 00:30:50.410 "seek_hole": false, 00:30:50.410 "seek_data": false, 00:30:50.410 "copy": false, 00:30:50.410 "nvme_iov_md": false 00:30:50.410 }, 00:30:50.410 "memory_domains": [ 00:30:50.410 { 00:30:50.410 "dma_device_id": "system", 00:30:50.410 "dma_device_type": 1 00:30:50.410 }, 00:30:50.410 { 00:30:50.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:50.410 "dma_device_type": 2 00:30:50.410 }, 00:30:50.410 { 00:30:50.410 "dma_device_id": "system", 00:30:50.410 "dma_device_type": 1 00:30:50.410 }, 00:30:50.410 { 00:30:50.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:50.410 "dma_device_type": 2 00:30:50.410 }, 00:30:50.410 { 00:30:50.410 "dma_device_id": "system", 00:30:50.410 "dma_device_type": 1 00:30:50.410 }, 00:30:50.410 { 00:30:50.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:50.410 "dma_device_type": 2 00:30:50.410 } 00:30:50.410 ], 00:30:50.410 "driver_specific": { 00:30:50.410 "raid": { 00:30:50.410 "uuid": "9cc92139-b88b-4412-b95e-4f9d7edbcc09", 00:30:50.410 "strip_size_kb": 64, 00:30:50.410 "state": "online", 00:30:50.410 "raid_level": "concat", 00:30:50.410 "superblock": true, 00:30:50.410 "num_base_bdevs": 3, 00:30:50.410 "num_base_bdevs_discovered": 3, 00:30:50.410 "num_base_bdevs_operational": 3, 00:30:50.410 "base_bdevs_list": [ 00:30:50.410 { 00:30:50.410 "name": "pt1", 00:30:50.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:50.410 "is_configured": true, 00:30:50.410 "data_offset": 2048, 00:30:50.410 "data_size": 63488 00:30:50.410 }, 00:30:50.410 { 00:30:50.410 "name": "pt2", 00:30:50.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:50.410 "is_configured": true, 00:30:50.410 "data_offset": 2048, 00:30:50.410 "data_size": 
63488 00:30:50.410 }, 00:30:50.410 { 00:30:50.410 "name": "pt3", 00:30:50.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:50.410 "is_configured": true, 00:30:50.410 "data_offset": 2048, 00:30:50.410 "data_size": 63488 00:30:50.410 } 00:30:50.410 ] 00:30:50.410 } 00:30:50.410 } 00:30:50.410 }' 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:50.410 pt2 00:30:50.410 pt3' 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.410 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.669 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:30:50.669 [2024-11-26 17:27:27.993476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9cc92139-b88b-4412-b95e-4f9d7edbcc09 '!=' 9cc92139-b88b-4412-b95e-4f9d7edbcc09 ']' 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67270 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67270 ']' 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67270 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67270 00:30:50.669 killing process with pid 67270 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67270' 00:30:50.669 17:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67270 00:30:50.669 [2024-11-26 17:27:28.059821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:50.669 17:27:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67270 00:30:50.669 [2024-11-26 17:27:28.059920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:50.669 [2024-11-26 17:27:28.059982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:50.670 [2024-11-26 17:27:28.059997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:30:51.237 [2024-11-26 17:27:28.374619] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:52.174 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:30:52.174 00:30:52.174 real 0m5.444s 00:30:52.174 user 0m7.849s 00:30:52.174 sys 0m1.017s 00:30:52.174 17:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.174 ************************************ 00:30:52.174 END TEST raid_superblock_test 00:30:52.174 ************************************ 00:30:52.174 17:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.174 17:27:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:30:52.174 17:27:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:52.174 17:27:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.174 17:27:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:52.174 ************************************ 00:30:52.174 START TEST raid_read_error_test 00:30:52.174 ************************************ 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:52.174 17:27:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AAB0krnLdO 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67525 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67525 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67525 ']' 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.174 17:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.434 [2024-11-26 17:27:29.729204] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:30:52.434 [2024-11-26 17:27:29.729378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67525 ] 00:30:52.692 [2024-11-26 17:27:29.921458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.692 [2024-11-26 17:27:30.040569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.951 [2024-11-26 17:27:30.254390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:52.951 [2024-11-26 17:27:30.254458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:53.210 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.210 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:30:53.210 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:53.210 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:53.210 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.210 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.470 BaseBdev1_malloc 00:30:53.470 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.470 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:30:53.470 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.470 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.470 true 00:30:53.470 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:53.470 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:53.470 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.470 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.470 [2024-11-26 17:27:30.688482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:53.470 [2024-11-26 17:27:30.688704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:53.470 [2024-11-26 17:27:30.688742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:53.470 [2024-11-26 17:27:30.688760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:53.470 [2024-11-26 17:27:30.691610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:53.471 [2024-11-26 17:27:30.691658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:53.471 BaseBdev1 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.471 BaseBdev2_malloc 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.471 true 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.471 [2024-11-26 17:27:30.752200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:53.471 [2024-11-26 17:27:30.752259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:53.471 [2024-11-26 17:27:30.752278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:53.471 [2024-11-26 17:27:30.752292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:53.471 [2024-11-26 17:27:30.754680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:53.471 [2024-11-26 17:27:30.754851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:53.471 BaseBdev2 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.471 BaseBdev3_malloc 00:30:53.471 17:27:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.471 true 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.471 [2024-11-26 17:27:30.837042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:30:53.471 [2024-11-26 17:27:30.837108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:53.471 [2024-11-26 17:27:30.837128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:53.471 [2024-11-26 17:27:30.837142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:53.471 [2024-11-26 17:27:30.839513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:53.471 [2024-11-26 17:27:30.839556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:53.471 BaseBdev3 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.471 [2024-11-26 17:27:30.845151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:53.471 [2024-11-26 17:27:30.847247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:53.471 [2024-11-26 17:27:30.847319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:53.471 [2024-11-26 17:27:30.847506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:53.471 [2024-11-26 17:27:30.847519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:53.471 [2024-11-26 17:27:30.847784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:30:53.471 [2024-11-26 17:27:30.847939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:53.471 [2024-11-26 17:27:30.847967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:30:53.471 [2024-11-26 17:27:30.848139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:53.471 17:27:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:53.471 "name": "raid_bdev1", 00:30:53.471 "uuid": "f989f85a-3a46-4015-8e6e-dfe56b352ff1", 00:30:53.471 "strip_size_kb": 64, 00:30:53.471 "state": "online", 00:30:53.471 "raid_level": "concat", 00:30:53.471 "superblock": true, 00:30:53.471 "num_base_bdevs": 3, 00:30:53.471 "num_base_bdevs_discovered": 3, 00:30:53.471 "num_base_bdevs_operational": 3, 00:30:53.471 "base_bdevs_list": [ 00:30:53.471 { 00:30:53.471 "name": "BaseBdev1", 00:30:53.471 "uuid": "c8448f11-fc48-5287-acc8-ead024ef2381", 00:30:53.471 "is_configured": true, 00:30:53.471 "data_offset": 2048, 00:30:53.471 "data_size": 63488 00:30:53.471 }, 00:30:53.471 { 00:30:53.471 "name": "BaseBdev2", 00:30:53.471 "uuid": "acd0bc9e-8b99-5deb-9206-1554841c7422", 00:30:53.471 "is_configured": true, 00:30:53.471 "data_offset": 2048, 00:30:53.471 "data_size": 63488 
00:30:53.471 }, 00:30:53.471 { 00:30:53.471 "name": "BaseBdev3", 00:30:53.471 "uuid": "156d9aa6-ef4a-5cc4-94ea-3f29379f35a1", 00:30:53.471 "is_configured": true, 00:30:53.471 "data_offset": 2048, 00:30:53.471 "data_size": 63488 00:30:53.471 } 00:30:53.471 ] 00:30:53.471 }' 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:53.471 17:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.039 17:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:54.039 17:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:54.039 [2024-11-26 17:27:31.418644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.975 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:54.976 "name": "raid_bdev1", 00:30:54.976 "uuid": "f989f85a-3a46-4015-8e6e-dfe56b352ff1", 00:30:54.976 "strip_size_kb": 64, 00:30:54.976 "state": "online", 00:30:54.976 "raid_level": "concat", 00:30:54.976 "superblock": true, 00:30:54.976 "num_base_bdevs": 3, 00:30:54.976 "num_base_bdevs_discovered": 3, 00:30:54.976 "num_base_bdevs_operational": 3, 00:30:54.976 "base_bdevs_list": [ 00:30:54.976 { 00:30:54.976 "name": "BaseBdev1", 00:30:54.976 "uuid": "c8448f11-fc48-5287-acc8-ead024ef2381", 00:30:54.976 "is_configured": true, 00:30:54.976 "data_offset": 2048, 00:30:54.976 "data_size": 63488 
00:30:54.976 }, 00:30:54.976 { 00:30:54.976 "name": "BaseBdev2", 00:30:54.976 "uuid": "acd0bc9e-8b99-5deb-9206-1554841c7422", 00:30:54.976 "is_configured": true, 00:30:54.976 "data_offset": 2048, 00:30:54.976 "data_size": 63488 00:30:54.976 }, 00:30:54.976 { 00:30:54.976 "name": "BaseBdev3", 00:30:54.976 "uuid": "156d9aa6-ef4a-5cc4-94ea-3f29379f35a1", 00:30:54.976 "is_configured": true, 00:30:54.976 "data_offset": 2048, 00:30:54.976 "data_size": 63488 00:30:54.976 } 00:30:54.976 ] 00:30:54.976 }' 00:30:54.976 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:54.976 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.543 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:55.543 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.543 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.543 [2024-11-26 17:27:32.773995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:55.543 [2024-11-26 17:27:32.774026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:55.543 [2024-11-26 17:27:32.776853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:55.543 [2024-11-26 17:27:32.776899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:55.543 [2024-11-26 17:27:32.776937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:55.543 [2024-11-26 17:27:32.776951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:30:55.543 { 00:30:55.543 "results": [ 00:30:55.543 { 00:30:55.543 "job": "raid_bdev1", 00:30:55.543 "core_mask": "0x1", 00:30:55.543 "workload": "randrw", 00:30:55.543 "percentage": 50, 
00:30:55.543 "status": "finished", 00:30:55.543 "queue_depth": 1, 00:30:55.543 "io_size": 131072, 00:30:55.543 "runtime": 1.352983, 00:30:55.543 "iops": 14581.853578352426, 00:30:55.543 "mibps": 1822.7316972940532, 00:30:55.543 "io_failed": 1, 00:30:55.543 "io_timeout": 0, 00:30:55.543 "avg_latency_us": 94.42511775637777, 00:30:55.543 "min_latency_us": 27.67238095238095, 00:30:55.543 "max_latency_us": 1771.032380952381 00:30:55.543 } 00:30:55.543 ], 00:30:55.543 "core_count": 1 00:30:55.543 } 00:30:55.543 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.543 17:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67525 00:30:55.543 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67525 ']' 00:30:55.543 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67525 00:30:55.543 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:30:55.544 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.544 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67525 00:30:55.544 killing process with pid 67525 00:30:55.544 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.544 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:55.544 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67525' 00:30:55.544 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67525 00:30:55.544 [2024-11-26 17:27:32.828659] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:55.544 17:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67525 00:30:55.803 [2024-11-26 
17:27:33.071990] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AAB0krnLdO 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:30:57.182 00:30:57.182 real 0m4.734s 00:30:57.182 user 0m5.644s 00:30:57.182 sys 0m0.648s 00:30:57.182 ************************************ 00:30:57.182 END TEST raid_read_error_test 00:30:57.182 ************************************ 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.182 17:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.182 17:27:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:30:57.182 17:27:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:57.182 17:27:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.182 17:27:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:57.182 ************************************ 00:30:57.182 START TEST raid_write_error_test 00:30:57.182 ************************************ 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:30:57.182 17:27:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:57.182 17:27:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aSeAHXNKnP 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67671 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67671 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67671 ']' 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:57.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.182 17:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.182 [2024-11-26 17:27:34.500313] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:30:57.182 [2024-11-26 17:27:34.500448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67671 ] 00:30:57.442 [2024-11-26 17:27:34.668027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.442 [2024-11-26 17:27:34.786504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.701 [2024-11-26 17:27:34.998696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:57.701 [2024-11-26 17:27:34.998936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:57.960 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.960 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:30:57.960 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:57.960 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:57.960 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.960 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.219 BaseBdev1_malloc 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.219 true 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.219 [2024-11-26 17:27:35.446998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:58.219 [2024-11-26 17:27:35.447069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.219 [2024-11-26 17:27:35.447094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:58.219 [2024-11-26 17:27:35.447109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.219 [2024-11-26 17:27:35.449564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.219 [2024-11-26 17:27:35.449611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:58.219 BaseBdev1 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:58.219 BaseBdev2_malloc 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.219 true 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.219 [2024-11-26 17:27:35.516706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:58.219 [2024-11-26 17:27:35.516771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.219 [2024-11-26 17:27:35.516792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:58.219 [2024-11-26 17:27:35.516806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.219 [2024-11-26 17:27:35.519374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.219 [2024-11-26 17:27:35.519545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:58.219 BaseBdev2 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.219 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:58.219 17:27:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.220 BaseBdev3_malloc 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.220 true 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.220 [2024-11-26 17:27:35.586078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:30:58.220 [2024-11-26 17:27:35.586137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.220 [2024-11-26 17:27:35.586160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:58.220 [2024-11-26 17:27:35.586177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.220 [2024-11-26 17:27:35.588729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.220 [2024-11-26 17:27:35.588924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:30:58.220 BaseBdev3 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.220 [2024-11-26 17:27:35.598206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:58.220 [2024-11-26 17:27:35.600587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:58.220 [2024-11-26 17:27:35.600817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:58.220 [2024-11-26 17:27:35.601297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:58.220 [2024-11-26 17:27:35.601431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:58.220 [2024-11-26 17:27:35.601937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:30:58.220 [2024-11-26 17:27:35.602331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:58.220 [2024-11-26 17:27:35.602472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:30:58.220 [2024-11-26 17:27:35.602937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:58.220 "name": "raid_bdev1", 00:30:58.220 "uuid": "4ecdb581-f492-4918-a35f-c12f5492b7f7", 00:30:58.220 "strip_size_kb": 64, 00:30:58.220 "state": "online", 00:30:58.220 "raid_level": "concat", 00:30:58.220 "superblock": true, 00:30:58.220 "num_base_bdevs": 3, 00:30:58.220 "num_base_bdevs_discovered": 3, 00:30:58.220 "num_base_bdevs_operational": 3, 00:30:58.220 "base_bdevs_list": [ 00:30:58.220 { 00:30:58.220 
"name": "BaseBdev1", 00:30:58.220 "uuid": "049bb6c6-f9c7-56ff-9e7a-b5156e9370f4", 00:30:58.220 "is_configured": true, 00:30:58.220 "data_offset": 2048, 00:30:58.220 "data_size": 63488 00:30:58.220 }, 00:30:58.220 { 00:30:58.220 "name": "BaseBdev2", 00:30:58.220 "uuid": "2da38541-5140-5ba9-8b30-1b31b6cc321d", 00:30:58.220 "is_configured": true, 00:30:58.220 "data_offset": 2048, 00:30:58.220 "data_size": 63488 00:30:58.220 }, 00:30:58.220 { 00:30:58.220 "name": "BaseBdev3", 00:30:58.220 "uuid": "b4f22c56-4a85-5c08-8a48-766ecb62c291", 00:30:58.220 "is_configured": true, 00:30:58.220 "data_offset": 2048, 00:30:58.220 "data_size": 63488 00:30:58.220 } 00:30:58.220 ] 00:30:58.220 }' 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:58.220 17:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.786 17:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:58.786 17:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:58.786 [2024-11-26 17:27:36.168271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:59.775 "name": "raid_bdev1", 00:30:59.775 "uuid": "4ecdb581-f492-4918-a35f-c12f5492b7f7", 00:30:59.775 "strip_size_kb": 64, 00:30:59.775 "state": "online", 
00:30:59.775 "raid_level": "concat", 00:30:59.775 "superblock": true, 00:30:59.775 "num_base_bdevs": 3, 00:30:59.775 "num_base_bdevs_discovered": 3, 00:30:59.775 "num_base_bdevs_operational": 3, 00:30:59.775 "base_bdevs_list": [ 00:30:59.775 { 00:30:59.775 "name": "BaseBdev1", 00:30:59.775 "uuid": "049bb6c6-f9c7-56ff-9e7a-b5156e9370f4", 00:30:59.775 "is_configured": true, 00:30:59.775 "data_offset": 2048, 00:30:59.775 "data_size": 63488 00:30:59.775 }, 00:30:59.775 { 00:30:59.775 "name": "BaseBdev2", 00:30:59.775 "uuid": "2da38541-5140-5ba9-8b30-1b31b6cc321d", 00:30:59.775 "is_configured": true, 00:30:59.775 "data_offset": 2048, 00:30:59.775 "data_size": 63488 00:30:59.775 }, 00:30:59.775 { 00:30:59.775 "name": "BaseBdev3", 00:30:59.775 "uuid": "b4f22c56-4a85-5c08-8a48-766ecb62c291", 00:30:59.775 "is_configured": true, 00:30:59.775 "data_offset": 2048, 00:30:59.775 "data_size": 63488 00:30:59.775 } 00:30:59.775 ] 00:30:59.775 }' 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:59.775 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.341 [2024-11-26 17:27:37.523213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:00.341 [2024-11-26 17:27:37.523416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:00.341 [2024-11-26 17:27:37.526511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:00.341 [2024-11-26 17:27:37.526685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:00.341 [2024-11-26 17:27:37.526766] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:00.341 [2024-11-26 17:27:37.526782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:00.341 { 00:31:00.341 "results": [ 00:31:00.341 { 00:31:00.341 "job": "raid_bdev1", 00:31:00.341 "core_mask": "0x1", 00:31:00.341 "workload": "randrw", 00:31:00.341 "percentage": 50, 00:31:00.341 "status": "finished", 00:31:00.341 "queue_depth": 1, 00:31:00.341 "io_size": 131072, 00:31:00.341 "runtime": 1.353193, 00:31:00.341 "iops": 15338.536335910694, 00:31:00.341 "mibps": 1917.3170419888368, 00:31:00.341 "io_failed": 1, 00:31:00.341 "io_timeout": 0, 00:31:00.341 "avg_latency_us": 89.87894736600617, 00:31:00.341 "min_latency_us": 26.81904761904762, 00:31:00.341 "max_latency_us": 1607.192380952381 00:31:00.341 } 00:31:00.341 ], 00:31:00.341 "core_count": 1 00:31:00.341 } 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67671 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67671 ']' 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67671 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67671 00:31:00.341 killing process with pid 67671 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.341 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.341 17:27:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67671' 00:31:00.342 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67671 00:31:00.342 [2024-11-26 17:27:37.572452] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:00.342 17:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67671 00:31:00.600 [2024-11-26 17:27:37.814411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aSeAHXNKnP 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:31:01.977 ************************************ 00:31:01.977 END TEST raid_write_error_test 00:31:01.977 ************************************ 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:31:01.977 00:31:01.977 real 0m4.657s 00:31:01.977 user 0m5.573s 00:31:01.977 sys 0m0.584s 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.977 17:27:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.977 17:27:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:31:01.977 17:27:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:31:01.977 17:27:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:01.977 17:27:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.977 17:27:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:01.977 ************************************ 00:31:01.977 START TEST raid_state_function_test 00:31:01.977 ************************************ 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67809 00:31:01.977 Process raid pid: 67809 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67809' 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67809 00:31:01.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67809 ']' 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.977 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.977 [2024-11-26 17:27:39.213060] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:31:01.977 [2024-11-26 17:27:39.213777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.977 [2024-11-26 17:27:39.384899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.236 [2024-11-26 17:27:39.508428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.494 [2024-11-26 17:27:39.732370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:02.494 [2024-11-26 17:27:39.732584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n 
Existed_Raid 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.753 [2024-11-26 17:27:40.171911] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:02.753 [2024-11-26 17:27:40.172308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:02.753 [2024-11-26 17:27:40.172451] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:02.753 [2024-11-26 17:27:40.172573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:02.753 [2024-11-26 17:27:40.172759] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:02.753 [2024-11-26 17:27:40.172851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:02.753 17:27:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.753 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.012 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.012 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:03.012 "name": "Existed_Raid", 00:31:03.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.012 "strip_size_kb": 0, 00:31:03.012 "state": "configuring", 00:31:03.012 "raid_level": "raid1", 00:31:03.012 "superblock": false, 00:31:03.012 "num_base_bdevs": 3, 00:31:03.012 "num_base_bdevs_discovered": 0, 00:31:03.012 "num_base_bdevs_operational": 3, 00:31:03.012 "base_bdevs_list": [ 00:31:03.012 { 00:31:03.012 "name": "BaseBdev1", 00:31:03.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.012 "is_configured": false, 00:31:03.012 "data_offset": 0, 00:31:03.012 "data_size": 0 00:31:03.012 }, 00:31:03.012 { 00:31:03.012 "name": "BaseBdev2", 00:31:03.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.012 "is_configured": false, 00:31:03.012 "data_offset": 0, 00:31:03.012 "data_size": 0 00:31:03.012 }, 00:31:03.012 { 00:31:03.012 "name": "BaseBdev3", 00:31:03.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.012 "is_configured": false, 00:31:03.012 "data_offset": 0, 
00:31:03.012 "data_size": 0 00:31:03.012 } 00:31:03.012 ] 00:31:03.012 }' 00:31:03.012 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:03.012 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.272 [2024-11-26 17:27:40.619979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:03.272 [2024-11-26 17:27:40.620017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.272 [2024-11-26 17:27:40.627942] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:03.272 [2024-11-26 17:27:40.628442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:03.272 [2024-11-26 17:27:40.628466] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:03.272 [2024-11-26 17:27:40.628487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:03.272 [2024-11-26 17:27:40.628495] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:31:03.272 [2024-11-26 17:27:40.628508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.272 [2024-11-26 17:27:40.681182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:03.272 BaseBdev1 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.272 [ 00:31:03.272 { 00:31:03.272 "name": "BaseBdev1", 00:31:03.272 "aliases": [ 00:31:03.272 "8e1252e4-f8ab-4d67-a35c-9ea091e98bc2" 00:31:03.272 ], 00:31:03.272 "product_name": "Malloc disk", 00:31:03.272 "block_size": 512, 00:31:03.272 "num_blocks": 65536, 00:31:03.272 "uuid": "8e1252e4-f8ab-4d67-a35c-9ea091e98bc2", 00:31:03.272 "assigned_rate_limits": { 00:31:03.272 "rw_ios_per_sec": 0, 00:31:03.272 "rw_mbytes_per_sec": 0, 00:31:03.272 "r_mbytes_per_sec": 0, 00:31:03.272 "w_mbytes_per_sec": 0 00:31:03.272 }, 00:31:03.272 "claimed": true, 00:31:03.272 "claim_type": "exclusive_write", 00:31:03.272 "zoned": false, 00:31:03.272 "supported_io_types": { 00:31:03.272 "read": true, 00:31:03.272 "write": true, 00:31:03.272 "unmap": true, 00:31:03.272 "flush": true, 00:31:03.272 "reset": true, 00:31:03.272 "nvme_admin": false, 00:31:03.272 "nvme_io": false, 00:31:03.272 "nvme_io_md": false, 00:31:03.272 "write_zeroes": true, 00:31:03.272 "zcopy": true, 00:31:03.272 "get_zone_info": false, 00:31:03.272 "zone_management": false, 00:31:03.272 "zone_append": false, 00:31:03.272 "compare": false, 00:31:03.272 "compare_and_write": false, 00:31:03.272 "abort": true, 00:31:03.272 "seek_hole": false, 00:31:03.272 "seek_data": false, 00:31:03.272 "copy": true, 00:31:03.272 "nvme_iov_md": false 00:31:03.272 }, 00:31:03.272 "memory_domains": [ 00:31:03.272 { 00:31:03.272 "dma_device_id": "system", 00:31:03.272 "dma_device_type": 1 00:31:03.272 }, 00:31:03.272 { 00:31:03.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:03.272 "dma_device_type": 2 00:31:03.272 } 00:31:03.272 ], 00:31:03.272 "driver_specific": {} 00:31:03.272 } 
00:31:03.272 ] 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:03.272 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.532 17:27:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:03.532 "name": "Existed_Raid", 00:31:03.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.532 "strip_size_kb": 0, 00:31:03.532 "state": "configuring", 00:31:03.532 "raid_level": "raid1", 00:31:03.532 "superblock": false, 00:31:03.532 "num_base_bdevs": 3, 00:31:03.532 "num_base_bdevs_discovered": 1, 00:31:03.532 "num_base_bdevs_operational": 3, 00:31:03.532 "base_bdevs_list": [ 00:31:03.532 { 00:31:03.532 "name": "BaseBdev1", 00:31:03.532 "uuid": "8e1252e4-f8ab-4d67-a35c-9ea091e98bc2", 00:31:03.532 "is_configured": true, 00:31:03.532 "data_offset": 0, 00:31:03.532 "data_size": 65536 00:31:03.532 }, 00:31:03.532 { 00:31:03.532 "name": "BaseBdev2", 00:31:03.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.532 "is_configured": false, 00:31:03.532 "data_offset": 0, 00:31:03.532 "data_size": 0 00:31:03.532 }, 00:31:03.532 { 00:31:03.532 "name": "BaseBdev3", 00:31:03.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.532 "is_configured": false, 00:31:03.532 "data_offset": 0, 00:31:03.532 "data_size": 0 00:31:03.532 } 00:31:03.532 ] 00:31:03.532 }' 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:03.532 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.827 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:03.827 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.827 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.827 [2024-11-26 17:27:41.137312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:03.827 [2024-11-26 17:27:41.137490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 
00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.828 [2024-11-26 17:27:41.145347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:03.828 [2024-11-26 17:27:41.147505] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:03.828 [2024-11-26 17:27:41.147552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:03.828 [2024-11-26 17:27:41.147564] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:03.828 [2024-11-26 17:27:41.147577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:03.828 17:27:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:03.828 "name": "Existed_Raid", 00:31:03.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.828 "strip_size_kb": 0, 00:31:03.828 "state": "configuring", 00:31:03.828 "raid_level": "raid1", 00:31:03.828 "superblock": false, 00:31:03.828 "num_base_bdevs": 3, 00:31:03.828 "num_base_bdevs_discovered": 1, 00:31:03.828 "num_base_bdevs_operational": 3, 00:31:03.828 "base_bdevs_list": [ 00:31:03.828 { 00:31:03.828 "name": "BaseBdev1", 00:31:03.828 "uuid": "8e1252e4-f8ab-4d67-a35c-9ea091e98bc2", 00:31:03.828 "is_configured": true, 00:31:03.828 "data_offset": 0, 00:31:03.828 "data_size": 65536 00:31:03.828 }, 00:31:03.828 { 00:31:03.828 "name": "BaseBdev2", 00:31:03.828 
"uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.828 "is_configured": false, 00:31:03.828 "data_offset": 0, 00:31:03.828 "data_size": 0 00:31:03.828 }, 00:31:03.828 { 00:31:03.828 "name": "BaseBdev3", 00:31:03.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.828 "is_configured": false, 00:31:03.828 "data_offset": 0, 00:31:03.828 "data_size": 0 00:31:03.828 } 00:31:03.828 ] 00:31:03.828 }' 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:03.828 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.397 [2024-11-26 17:27:41.663109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:04.397 BaseBdev2 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_wait_for_examine 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.397 [ 00:31:04.397 { 00:31:04.397 "name": "BaseBdev2", 00:31:04.397 "aliases": [ 00:31:04.397 "d7bb43f4-20cc-4db5-989b-860841977156" 00:31:04.397 ], 00:31:04.397 "product_name": "Malloc disk", 00:31:04.397 "block_size": 512, 00:31:04.397 "num_blocks": 65536, 00:31:04.397 "uuid": "d7bb43f4-20cc-4db5-989b-860841977156", 00:31:04.397 "assigned_rate_limits": { 00:31:04.397 "rw_ios_per_sec": 0, 00:31:04.397 "rw_mbytes_per_sec": 0, 00:31:04.397 "r_mbytes_per_sec": 0, 00:31:04.397 "w_mbytes_per_sec": 0 00:31:04.397 }, 00:31:04.397 "claimed": true, 00:31:04.397 "claim_type": "exclusive_write", 00:31:04.397 "zoned": false, 00:31:04.397 "supported_io_types": { 00:31:04.397 "read": true, 00:31:04.397 "write": true, 00:31:04.397 "unmap": true, 00:31:04.397 "flush": true, 00:31:04.397 "reset": true, 00:31:04.397 "nvme_admin": false, 00:31:04.397 "nvme_io": false, 00:31:04.397 "nvme_io_md": false, 00:31:04.397 "write_zeroes": true, 00:31:04.397 "zcopy": true, 00:31:04.397 "get_zone_info": false, 00:31:04.397 "zone_management": false, 00:31:04.397 "zone_append": false, 00:31:04.397 "compare": false, 00:31:04.397 "compare_and_write": false, 00:31:04.397 "abort": true, 00:31:04.397 "seek_hole": false, 00:31:04.397 "seek_data": false, 00:31:04.397 "copy": true, 00:31:04.397 "nvme_iov_md": false 
00:31:04.397 }, 00:31:04.397 "memory_domains": [ 00:31:04.397 { 00:31:04.397 "dma_device_id": "system", 00:31:04.397 "dma_device_type": 1 00:31:04.397 }, 00:31:04.397 { 00:31:04.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:04.397 "dma_device_type": 2 00:31:04.397 } 00:31:04.397 ], 00:31:04.397 "driver_specific": {} 00:31:04.397 } 00:31:04.397 ] 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:04.397 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:04.398 "name": "Existed_Raid", 00:31:04.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.398 "strip_size_kb": 0, 00:31:04.398 "state": "configuring", 00:31:04.398 "raid_level": "raid1", 00:31:04.398 "superblock": false, 00:31:04.398 "num_base_bdevs": 3, 00:31:04.398 "num_base_bdevs_discovered": 2, 00:31:04.398 "num_base_bdevs_operational": 3, 00:31:04.398 "base_bdevs_list": [ 00:31:04.398 { 00:31:04.398 "name": "BaseBdev1", 00:31:04.398 "uuid": "8e1252e4-f8ab-4d67-a35c-9ea091e98bc2", 00:31:04.398 "is_configured": true, 00:31:04.398 "data_offset": 0, 00:31:04.398 "data_size": 65536 00:31:04.398 }, 00:31:04.398 { 00:31:04.398 "name": "BaseBdev2", 00:31:04.398 "uuid": "d7bb43f4-20cc-4db5-989b-860841977156", 00:31:04.398 "is_configured": true, 00:31:04.398 "data_offset": 0, 00:31:04.398 "data_size": 65536 00:31:04.398 }, 00:31:04.398 { 00:31:04.398 "name": "BaseBdev3", 00:31:04.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.398 "is_configured": false, 00:31:04.398 "data_offset": 0, 00:31:04.398 "data_size": 0 00:31:04.398 } 00:31:04.398 ] 00:31:04.398 }' 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:04.398 17:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.966 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.967 [2024-11-26 17:27:42.226387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:04.967 [2024-11-26 17:27:42.226648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:04.967 [2024-11-26 17:27:42.226677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:04.967 [2024-11-26 17:27:42.226991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:04.967 [2024-11-26 17:27:42.227195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:04.967 [2024-11-26 17:27:42.227206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:04.967 [2024-11-26 17:27:42.227491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:04.967 BaseBdev3 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:04.967 17:27:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.967 [ 00:31:04.967 { 00:31:04.967 "name": "BaseBdev3", 00:31:04.967 "aliases": [ 00:31:04.967 "55ec4a46-9ff7-4f0e-b65e-8adad2a1de7b" 00:31:04.967 ], 00:31:04.967 "product_name": "Malloc disk", 00:31:04.967 "block_size": 512, 00:31:04.967 "num_blocks": 65536, 00:31:04.967 "uuid": "55ec4a46-9ff7-4f0e-b65e-8adad2a1de7b", 00:31:04.967 "assigned_rate_limits": { 00:31:04.967 "rw_ios_per_sec": 0, 00:31:04.967 "rw_mbytes_per_sec": 0, 00:31:04.967 "r_mbytes_per_sec": 0, 00:31:04.967 "w_mbytes_per_sec": 0 00:31:04.967 }, 00:31:04.967 "claimed": true, 00:31:04.967 "claim_type": "exclusive_write", 00:31:04.967 "zoned": false, 00:31:04.967 "supported_io_types": { 00:31:04.967 "read": true, 00:31:04.967 "write": true, 00:31:04.967 "unmap": true, 00:31:04.967 "flush": true, 00:31:04.967 "reset": true, 00:31:04.967 "nvme_admin": false, 00:31:04.967 "nvme_io": false, 00:31:04.967 "nvme_io_md": false, 00:31:04.967 "write_zeroes": true, 00:31:04.967 "zcopy": true, 00:31:04.967 "get_zone_info": false, 00:31:04.967 "zone_management": false, 00:31:04.967 "zone_append": false, 00:31:04.967 "compare": false, 00:31:04.967 "compare_and_write": false, 00:31:04.967 "abort": true, 00:31:04.967 "seek_hole": false, 00:31:04.967 
"seek_data": false, 00:31:04.967 "copy": true, 00:31:04.967 "nvme_iov_md": false 00:31:04.967 }, 00:31:04.967 "memory_domains": [ 00:31:04.967 { 00:31:04.967 "dma_device_id": "system", 00:31:04.967 "dma_device_type": 1 00:31:04.967 }, 00:31:04.967 { 00:31:04.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:04.967 "dma_device_type": 2 00:31:04.967 } 00:31:04.967 ], 00:31:04.967 "driver_specific": {} 00:31:04.967 } 00:31:04.967 ] 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:04.967 "name": "Existed_Raid", 00:31:04.967 "uuid": "15e5a5a6-4ff0-494b-a25d-b6bc06d41272", 00:31:04.967 "strip_size_kb": 0, 00:31:04.967 "state": "online", 00:31:04.967 "raid_level": "raid1", 00:31:04.967 "superblock": false, 00:31:04.967 "num_base_bdevs": 3, 00:31:04.967 "num_base_bdevs_discovered": 3, 00:31:04.967 "num_base_bdevs_operational": 3, 00:31:04.967 "base_bdevs_list": [ 00:31:04.967 { 00:31:04.967 "name": "BaseBdev1", 00:31:04.967 "uuid": "8e1252e4-f8ab-4d67-a35c-9ea091e98bc2", 00:31:04.967 "is_configured": true, 00:31:04.967 "data_offset": 0, 00:31:04.967 "data_size": 65536 00:31:04.967 }, 00:31:04.967 { 00:31:04.967 "name": "BaseBdev2", 00:31:04.967 "uuid": "d7bb43f4-20cc-4db5-989b-860841977156", 00:31:04.967 "is_configured": true, 00:31:04.967 "data_offset": 0, 00:31:04.967 "data_size": 65536 00:31:04.967 }, 00:31:04.967 { 00:31:04.967 "name": "BaseBdev3", 00:31:04.967 "uuid": "55ec4a46-9ff7-4f0e-b65e-8adad2a1de7b", 00:31:04.967 "is_configured": true, 00:31:04.967 "data_offset": 0, 00:31:04.967 "data_size": 65536 00:31:04.967 } 00:31:04.967 ] 00:31:04.967 }' 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:04.967 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.535 
17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.535 [2024-11-26 17:27:42.734906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:05.535 "name": "Existed_Raid", 00:31:05.535 "aliases": [ 00:31:05.535 "15e5a5a6-4ff0-494b-a25d-b6bc06d41272" 00:31:05.535 ], 00:31:05.535 "product_name": "Raid Volume", 00:31:05.535 "block_size": 512, 00:31:05.535 "num_blocks": 65536, 00:31:05.535 "uuid": "15e5a5a6-4ff0-494b-a25d-b6bc06d41272", 00:31:05.535 "assigned_rate_limits": { 00:31:05.535 "rw_ios_per_sec": 0, 00:31:05.535 "rw_mbytes_per_sec": 0, 00:31:05.535 "r_mbytes_per_sec": 0, 00:31:05.535 "w_mbytes_per_sec": 0 00:31:05.535 }, 00:31:05.535 "claimed": false, 00:31:05.535 "zoned": false, 
00:31:05.535 "supported_io_types": { 00:31:05.535 "read": true, 00:31:05.535 "write": true, 00:31:05.535 "unmap": false, 00:31:05.535 "flush": false, 00:31:05.535 "reset": true, 00:31:05.535 "nvme_admin": false, 00:31:05.535 "nvme_io": false, 00:31:05.535 "nvme_io_md": false, 00:31:05.535 "write_zeroes": true, 00:31:05.535 "zcopy": false, 00:31:05.535 "get_zone_info": false, 00:31:05.535 "zone_management": false, 00:31:05.535 "zone_append": false, 00:31:05.535 "compare": false, 00:31:05.535 "compare_and_write": false, 00:31:05.535 "abort": false, 00:31:05.535 "seek_hole": false, 00:31:05.535 "seek_data": false, 00:31:05.535 "copy": false, 00:31:05.535 "nvme_iov_md": false 00:31:05.535 }, 00:31:05.535 "memory_domains": [ 00:31:05.535 { 00:31:05.535 "dma_device_id": "system", 00:31:05.535 "dma_device_type": 1 00:31:05.535 }, 00:31:05.535 { 00:31:05.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:05.535 "dma_device_type": 2 00:31:05.535 }, 00:31:05.535 { 00:31:05.535 "dma_device_id": "system", 00:31:05.535 "dma_device_type": 1 00:31:05.535 }, 00:31:05.535 { 00:31:05.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:05.535 "dma_device_type": 2 00:31:05.535 }, 00:31:05.535 { 00:31:05.535 "dma_device_id": "system", 00:31:05.535 "dma_device_type": 1 00:31:05.535 }, 00:31:05.535 { 00:31:05.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:05.535 "dma_device_type": 2 00:31:05.535 } 00:31:05.535 ], 00:31:05.535 "driver_specific": { 00:31:05.535 "raid": { 00:31:05.535 "uuid": "15e5a5a6-4ff0-494b-a25d-b6bc06d41272", 00:31:05.535 "strip_size_kb": 0, 00:31:05.535 "state": "online", 00:31:05.535 "raid_level": "raid1", 00:31:05.535 "superblock": false, 00:31:05.535 "num_base_bdevs": 3, 00:31:05.535 "num_base_bdevs_discovered": 3, 00:31:05.535 "num_base_bdevs_operational": 3, 00:31:05.535 "base_bdevs_list": [ 00:31:05.535 { 00:31:05.535 "name": "BaseBdev1", 00:31:05.535 "uuid": "8e1252e4-f8ab-4d67-a35c-9ea091e98bc2", 00:31:05.535 "is_configured": true, 00:31:05.535 
"data_offset": 0, 00:31:05.535 "data_size": 65536 00:31:05.535 }, 00:31:05.535 { 00:31:05.535 "name": "BaseBdev2", 00:31:05.535 "uuid": "d7bb43f4-20cc-4db5-989b-860841977156", 00:31:05.535 "is_configured": true, 00:31:05.535 "data_offset": 0, 00:31:05.535 "data_size": 65536 00:31:05.535 }, 00:31:05.535 { 00:31:05.535 "name": "BaseBdev3", 00:31:05.535 "uuid": "55ec4a46-9ff7-4f0e-b65e-8adad2a1de7b", 00:31:05.535 "is_configured": true, 00:31:05.535 "data_offset": 0, 00:31:05.535 "data_size": 65536 00:31:05.535 } 00:31:05.535 ] 00:31:05.535 } 00:31:05.535 } 00:31:05.535 }' 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:05.535 BaseBdev2 00:31:05.535 BaseBdev3' 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:05.535 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:05.536 17:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.795 [2024-11-26 17:27:43.010630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.795 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:05.795 "name": "Existed_Raid", 00:31:05.795 "uuid": "15e5a5a6-4ff0-494b-a25d-b6bc06d41272", 00:31:05.795 "strip_size_kb": 0, 00:31:05.796 "state": "online", 00:31:05.796 "raid_level": "raid1", 00:31:05.796 "superblock": false, 00:31:05.796 "num_base_bdevs": 3, 00:31:05.796 "num_base_bdevs_discovered": 2, 00:31:05.796 "num_base_bdevs_operational": 2, 00:31:05.796 "base_bdevs_list": [ 00:31:05.796 { 00:31:05.796 "name": null, 00:31:05.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.796 "is_configured": false, 00:31:05.796 "data_offset": 0, 00:31:05.796 "data_size": 65536 00:31:05.796 }, 00:31:05.796 { 00:31:05.796 "name": "BaseBdev2", 00:31:05.796 "uuid": "d7bb43f4-20cc-4db5-989b-860841977156", 00:31:05.796 "is_configured": true, 00:31:05.796 "data_offset": 0, 00:31:05.796 "data_size": 65536 00:31:05.796 }, 00:31:05.796 { 00:31:05.796 "name": "BaseBdev3", 00:31:05.796 "uuid": "55ec4a46-9ff7-4f0e-b65e-8adad2a1de7b", 00:31:05.796 "is_configured": true, 00:31:05.796 "data_offset": 0, 00:31:05.796 "data_size": 65536 00:31:05.796 } 00:31:05.796 ] 
00:31:05.796 }' 00:31:05.796 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:05.796 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.363 [2024-11-26 17:27:43.629295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:06.363 17:27:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.363 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.363 [2024-11-26 17:27:43.784016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:06.363 [2024-11-26 17:27:43.784272] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:06.621 [2024-11-26 17:27:43.888187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:06.621 [2024-11-26 17:27:43.888237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:06.621 [2024-11-26 17:27:43.888252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:06.621 17:27:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.621 BaseBdev2 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:06.621 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:06.621 
17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:06.622 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:06.622 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:06.622 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:06.622 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.622 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.622 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.622 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:06.622 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.622 17:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.622 [ 00:31:06.622 { 00:31:06.622 "name": "BaseBdev2", 00:31:06.622 "aliases": [ 00:31:06.622 "0928a920-5302-435b-acb2-bd3716db5c41" 00:31:06.622 ], 00:31:06.622 "product_name": "Malloc disk", 00:31:06.622 "block_size": 512, 00:31:06.622 "num_blocks": 65536, 00:31:06.622 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:06.622 "assigned_rate_limits": { 00:31:06.622 "rw_ios_per_sec": 0, 00:31:06.622 "rw_mbytes_per_sec": 0, 00:31:06.622 "r_mbytes_per_sec": 0, 00:31:06.622 "w_mbytes_per_sec": 0 00:31:06.622 }, 00:31:06.622 "claimed": false, 00:31:06.622 "zoned": false, 00:31:06.622 "supported_io_types": { 00:31:06.622 "read": true, 00:31:06.622 "write": true, 00:31:06.622 "unmap": true, 00:31:06.622 "flush": true, 00:31:06.622 "reset": true, 00:31:06.622 "nvme_admin": false, 00:31:06.622 "nvme_io": false, 00:31:06.622 "nvme_io_md": false, 00:31:06.622 "write_zeroes": true, 
00:31:06.622 "zcopy": true, 00:31:06.622 "get_zone_info": false, 00:31:06.622 "zone_management": false, 00:31:06.622 "zone_append": false, 00:31:06.622 "compare": false, 00:31:06.622 "compare_and_write": false, 00:31:06.622 "abort": true, 00:31:06.622 "seek_hole": false, 00:31:06.622 "seek_data": false, 00:31:06.622 "copy": true, 00:31:06.622 "nvme_iov_md": false 00:31:06.622 }, 00:31:06.622 "memory_domains": [ 00:31:06.622 { 00:31:06.622 "dma_device_id": "system", 00:31:06.622 "dma_device_type": 1 00:31:06.622 }, 00:31:06.622 { 00:31:06.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:06.622 "dma_device_type": 2 00:31:06.622 } 00:31:06.622 ], 00:31:06.622 "driver_specific": {} 00:31:06.622 } 00:31:06.622 ] 00:31:06.622 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.622 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:06.622 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:06.622 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:06.622 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:06.622 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.622 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.881 BaseBdev3 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:06.881 17:27:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.881 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.881 [ 00:31:06.881 { 00:31:06.881 "name": "BaseBdev3", 00:31:06.881 "aliases": [ 00:31:06.881 "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5" 00:31:06.881 ], 00:31:06.881 "product_name": "Malloc disk", 00:31:06.881 "block_size": 512, 00:31:06.881 "num_blocks": 65536, 00:31:06.881 "uuid": "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:06.881 "assigned_rate_limits": { 00:31:06.881 "rw_ios_per_sec": 0, 00:31:06.881 "rw_mbytes_per_sec": 0, 00:31:06.881 "r_mbytes_per_sec": 0, 00:31:06.881 "w_mbytes_per_sec": 0 00:31:06.881 }, 00:31:06.881 "claimed": false, 00:31:06.881 "zoned": false, 00:31:06.881 "supported_io_types": { 00:31:06.881 "read": true, 00:31:06.881 "write": true, 00:31:06.881 "unmap": true, 00:31:06.881 "flush": true, 00:31:06.881 "reset": true, 00:31:06.881 "nvme_admin": false, 00:31:06.881 "nvme_io": false, 00:31:06.881 "nvme_io_md": false, 00:31:06.881 "write_zeroes": true, 
00:31:06.881 "zcopy": true, 00:31:06.881 "get_zone_info": false, 00:31:06.881 "zone_management": false, 00:31:06.881 "zone_append": false, 00:31:06.881 "compare": false, 00:31:06.881 "compare_and_write": false, 00:31:06.881 "abort": true, 00:31:06.881 "seek_hole": false, 00:31:06.881 "seek_data": false, 00:31:06.881 "copy": true, 00:31:06.881 "nvme_iov_md": false 00:31:06.881 }, 00:31:06.881 "memory_domains": [ 00:31:06.881 { 00:31:06.881 "dma_device_id": "system", 00:31:06.881 "dma_device_type": 1 00:31:06.881 }, 00:31:06.881 { 00:31:06.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:06.881 "dma_device_type": 2 00:31:06.881 } 00:31:06.882 ], 00:31:06.882 "driver_specific": {} 00:31:06.882 } 00:31:06.882 ] 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.882 [2024-11-26 17:27:44.106841] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:06.882 [2024-11-26 17:27:44.106892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:06.882 [2024-11-26 17:27:44.106912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:06.882 [2024-11-26 17:27:44.108977] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:31:06.882 "name": "Existed_Raid", 00:31:06.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.882 "strip_size_kb": 0, 00:31:06.882 "state": "configuring", 00:31:06.882 "raid_level": "raid1", 00:31:06.882 "superblock": false, 00:31:06.882 "num_base_bdevs": 3, 00:31:06.882 "num_base_bdevs_discovered": 2, 00:31:06.882 "num_base_bdevs_operational": 3, 00:31:06.882 "base_bdevs_list": [ 00:31:06.882 { 00:31:06.882 "name": "BaseBdev1", 00:31:06.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.882 "is_configured": false, 00:31:06.882 "data_offset": 0, 00:31:06.882 "data_size": 0 00:31:06.882 }, 00:31:06.882 { 00:31:06.882 "name": "BaseBdev2", 00:31:06.882 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:06.882 "is_configured": true, 00:31:06.882 "data_offset": 0, 00:31:06.882 "data_size": 65536 00:31:06.882 }, 00:31:06.882 { 00:31:06.882 "name": "BaseBdev3", 00:31:06.882 "uuid": "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:06.882 "is_configured": true, 00:31:06.882 "data_offset": 0, 00:31:06.882 "data_size": 65536 00:31:06.882 } 00:31:06.882 ] 00:31:06.882 }' 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:06.882 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.141 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:07.141 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.141 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.141 [2024-11-26 17:27:44.566985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:07.141 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.141 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:31:07.141 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:07.141 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:07.141 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:07.142 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.400 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.400 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:07.400 "name": "Existed_Raid", 00:31:07.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.400 "strip_size_kb": 0, 00:31:07.400 "state": "configuring", 00:31:07.400 "raid_level": "raid1", 00:31:07.400 "superblock": false, 00:31:07.400 "num_base_bdevs": 3, 
00:31:07.400 "num_base_bdevs_discovered": 1, 00:31:07.400 "num_base_bdevs_operational": 3, 00:31:07.400 "base_bdevs_list": [ 00:31:07.400 { 00:31:07.400 "name": "BaseBdev1", 00:31:07.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.400 "is_configured": false, 00:31:07.400 "data_offset": 0, 00:31:07.400 "data_size": 0 00:31:07.400 }, 00:31:07.400 { 00:31:07.400 "name": null, 00:31:07.400 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:07.400 "is_configured": false, 00:31:07.400 "data_offset": 0, 00:31:07.400 "data_size": 65536 00:31:07.400 }, 00:31:07.400 { 00:31:07.400 "name": "BaseBdev3", 00:31:07.400 "uuid": "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:07.400 "is_configured": true, 00:31:07.400 "data_offset": 0, 00:31:07.400 "data_size": 65536 00:31:07.400 } 00:31:07.400 ] 00:31:07.400 }' 00:31:07.400 17:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:07.400 17:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.658 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:07.658 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.658 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.658 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.658 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.658 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:07.658 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:07.658 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.658 17:27:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.916 [2024-11-26 17:27:45.117437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:07.916 BaseBdev1 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.916 [ 00:31:07.916 { 00:31:07.916 "name": "BaseBdev1", 00:31:07.916 "aliases": [ 00:31:07.916 "bf7a25f3-66c8-4506-9002-65f0b0a960e6" 00:31:07.916 ], 00:31:07.916 "product_name": "Malloc disk", 
00:31:07.916 "block_size": 512, 00:31:07.916 "num_blocks": 65536, 00:31:07.916 "uuid": "bf7a25f3-66c8-4506-9002-65f0b0a960e6", 00:31:07.916 "assigned_rate_limits": { 00:31:07.916 "rw_ios_per_sec": 0, 00:31:07.916 "rw_mbytes_per_sec": 0, 00:31:07.916 "r_mbytes_per_sec": 0, 00:31:07.916 "w_mbytes_per_sec": 0 00:31:07.916 }, 00:31:07.916 "claimed": true, 00:31:07.916 "claim_type": "exclusive_write", 00:31:07.916 "zoned": false, 00:31:07.916 "supported_io_types": { 00:31:07.916 "read": true, 00:31:07.916 "write": true, 00:31:07.916 "unmap": true, 00:31:07.916 "flush": true, 00:31:07.916 "reset": true, 00:31:07.916 "nvme_admin": false, 00:31:07.916 "nvme_io": false, 00:31:07.916 "nvme_io_md": false, 00:31:07.916 "write_zeroes": true, 00:31:07.916 "zcopy": true, 00:31:07.916 "get_zone_info": false, 00:31:07.916 "zone_management": false, 00:31:07.916 "zone_append": false, 00:31:07.916 "compare": false, 00:31:07.916 "compare_and_write": false, 00:31:07.916 "abort": true, 00:31:07.916 "seek_hole": false, 00:31:07.916 "seek_data": false, 00:31:07.916 "copy": true, 00:31:07.916 "nvme_iov_md": false 00:31:07.916 }, 00:31:07.916 "memory_domains": [ 00:31:07.916 { 00:31:07.916 "dma_device_id": "system", 00:31:07.916 "dma_device_type": 1 00:31:07.916 }, 00:31:07.916 { 00:31:07.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:07.916 "dma_device_type": 2 00:31:07.916 } 00:31:07.916 ], 00:31:07.916 "driver_specific": {} 00:31:07.916 } 00:31:07.916 ] 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:07.916 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:07.917 "name": "Existed_Raid", 00:31:07.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.917 "strip_size_kb": 0, 00:31:07.917 "state": "configuring", 00:31:07.917 "raid_level": "raid1", 00:31:07.917 "superblock": false, 00:31:07.917 "num_base_bdevs": 3, 00:31:07.917 "num_base_bdevs_discovered": 2, 00:31:07.917 "num_base_bdevs_operational": 3, 00:31:07.917 "base_bdevs_list": [ 00:31:07.917 { 00:31:07.917 "name": "BaseBdev1", 00:31:07.917 "uuid": 
"bf7a25f3-66c8-4506-9002-65f0b0a960e6", 00:31:07.917 "is_configured": true, 00:31:07.917 "data_offset": 0, 00:31:07.917 "data_size": 65536 00:31:07.917 }, 00:31:07.917 { 00:31:07.917 "name": null, 00:31:07.917 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:07.917 "is_configured": false, 00:31:07.917 "data_offset": 0, 00:31:07.917 "data_size": 65536 00:31:07.917 }, 00:31:07.917 { 00:31:07.917 "name": "BaseBdev3", 00:31:07.917 "uuid": "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:07.917 "is_configured": true, 00:31:07.917 "data_offset": 0, 00:31:07.917 "data_size": 65536 00:31:07.917 } 00:31:07.917 ] 00:31:07.917 }' 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:07.917 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.174 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.174 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.174 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.174 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:08.174 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.432 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:08.432 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.433 [2024-11-26 17:27:45.649627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:08.433 17:27:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:08.433 "name": "Existed_Raid", 00:31:08.433 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:08.433 "strip_size_kb": 0, 00:31:08.433 "state": "configuring", 00:31:08.433 "raid_level": "raid1", 00:31:08.433 "superblock": false, 00:31:08.433 "num_base_bdevs": 3, 00:31:08.433 "num_base_bdevs_discovered": 1, 00:31:08.433 "num_base_bdevs_operational": 3, 00:31:08.433 "base_bdevs_list": [ 00:31:08.433 { 00:31:08.433 "name": "BaseBdev1", 00:31:08.433 "uuid": "bf7a25f3-66c8-4506-9002-65f0b0a960e6", 00:31:08.433 "is_configured": true, 00:31:08.433 "data_offset": 0, 00:31:08.433 "data_size": 65536 00:31:08.433 }, 00:31:08.433 { 00:31:08.433 "name": null, 00:31:08.433 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:08.433 "is_configured": false, 00:31:08.433 "data_offset": 0, 00:31:08.433 "data_size": 65536 00:31:08.433 }, 00:31:08.433 { 00:31:08.433 "name": null, 00:31:08.433 "uuid": "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:08.433 "is_configured": false, 00:31:08.433 "data_offset": 0, 00:31:08.433 "data_size": 65536 00:31:08.433 } 00:31:08.433 ] 00:31:08.433 }' 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:08.433 17:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.691 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.691 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.691 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.691 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:08.691 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.950 [2024-11-26 17:27:46.149776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:08.950 "name": "Existed_Raid", 00:31:08.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:08.950 "strip_size_kb": 0, 00:31:08.950 "state": "configuring", 00:31:08.950 "raid_level": "raid1", 00:31:08.950 "superblock": false, 00:31:08.950 "num_base_bdevs": 3, 00:31:08.950 "num_base_bdevs_discovered": 2, 00:31:08.950 "num_base_bdevs_operational": 3, 00:31:08.950 "base_bdevs_list": [ 00:31:08.950 { 00:31:08.950 "name": "BaseBdev1", 00:31:08.950 "uuid": "bf7a25f3-66c8-4506-9002-65f0b0a960e6", 00:31:08.950 "is_configured": true, 00:31:08.950 "data_offset": 0, 00:31:08.950 "data_size": 65536 00:31:08.950 }, 00:31:08.950 { 00:31:08.950 "name": null, 00:31:08.950 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:08.950 "is_configured": false, 00:31:08.950 "data_offset": 0, 00:31:08.950 "data_size": 65536 00:31:08.950 }, 00:31:08.950 { 00:31:08.950 "name": "BaseBdev3", 00:31:08.950 "uuid": "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:08.950 "is_configured": true, 00:31:08.950 "data_offset": 0, 00:31:08.950 "data_size": 65536 00:31:08.950 } 00:31:08.950 ] 00:31:08.950 }' 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:08.950 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.208 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:09.208 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.208 17:27:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.209 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.209 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.209 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:09.209 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:09.209 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.209 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.209 [2024-11-26 17:27:46.649892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:09.467 17:27:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:09.467 "name": "Existed_Raid", 00:31:09.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.467 "strip_size_kb": 0, 00:31:09.467 "state": "configuring", 00:31:09.467 "raid_level": "raid1", 00:31:09.467 "superblock": false, 00:31:09.467 "num_base_bdevs": 3, 00:31:09.467 "num_base_bdevs_discovered": 1, 00:31:09.467 "num_base_bdevs_operational": 3, 00:31:09.467 "base_bdevs_list": [ 00:31:09.467 { 00:31:09.467 "name": null, 00:31:09.467 "uuid": "bf7a25f3-66c8-4506-9002-65f0b0a960e6", 00:31:09.467 "is_configured": false, 00:31:09.467 "data_offset": 0, 00:31:09.467 "data_size": 65536 00:31:09.467 }, 00:31:09.467 { 00:31:09.467 "name": null, 00:31:09.467 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:09.467 "is_configured": false, 00:31:09.467 "data_offset": 0, 00:31:09.467 "data_size": 65536 00:31:09.467 }, 00:31:09.467 { 00:31:09.467 "name": "BaseBdev3", 00:31:09.467 "uuid": "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:09.467 "is_configured": true, 00:31:09.467 "data_offset": 0, 00:31:09.467 "data_size": 65536 00:31:09.467 } 00:31:09.467 ] 00:31:09.467 }' 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:09.467 17:27:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.035 [2024-11-26 17:27:47.244626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:10.035 "name": "Existed_Raid", 00:31:10.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.035 "strip_size_kb": 0, 00:31:10.035 "state": "configuring", 00:31:10.035 "raid_level": "raid1", 00:31:10.035 "superblock": false, 00:31:10.035 "num_base_bdevs": 3, 00:31:10.035 "num_base_bdevs_discovered": 2, 00:31:10.035 "num_base_bdevs_operational": 3, 00:31:10.035 "base_bdevs_list": [ 00:31:10.035 { 00:31:10.035 "name": null, 00:31:10.035 "uuid": "bf7a25f3-66c8-4506-9002-65f0b0a960e6", 00:31:10.035 "is_configured": false, 00:31:10.035 "data_offset": 0, 00:31:10.035 "data_size": 65536 00:31:10.035 }, 00:31:10.035 { 00:31:10.035 "name": "BaseBdev2", 00:31:10.035 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:10.035 "is_configured": true, 00:31:10.035 "data_offset": 0, 00:31:10.035 "data_size": 65536 00:31:10.035 }, 00:31:10.035 { 
00:31:10.035 "name": "BaseBdev3", 00:31:10.035 "uuid": "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:10.035 "is_configured": true, 00:31:10.035 "data_offset": 0, 00:31:10.035 "data_size": 65536 00:31:10.035 } 00:31:10.035 ] 00:31:10.035 }' 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:10.035 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.294 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bf7a25f3-66c8-4506-9002-65f0b0a960e6 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.553 17:27:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.553 [2024-11-26 17:27:47.803141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:10.553 [2024-11-26 17:27:47.803205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:10.553 [2024-11-26 17:27:47.803214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:10.553 [2024-11-26 17:27:47.803477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:31:10.553 [2024-11-26 17:27:47.803629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:10.553 [2024-11-26 17:27:47.803647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:31:10.553 [2024-11-26 17:27:47.803895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:10.553 NewBaseBdev 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.553 [ 00:31:10.553 { 00:31:10.553 "name": "NewBaseBdev", 00:31:10.553 "aliases": [ 00:31:10.553 "bf7a25f3-66c8-4506-9002-65f0b0a960e6" 00:31:10.553 ], 00:31:10.553 "product_name": "Malloc disk", 00:31:10.553 "block_size": 512, 00:31:10.553 "num_blocks": 65536, 00:31:10.553 "uuid": "bf7a25f3-66c8-4506-9002-65f0b0a960e6", 00:31:10.553 "assigned_rate_limits": { 00:31:10.553 "rw_ios_per_sec": 0, 00:31:10.553 "rw_mbytes_per_sec": 0, 00:31:10.553 "r_mbytes_per_sec": 0, 00:31:10.553 "w_mbytes_per_sec": 0 00:31:10.553 }, 00:31:10.553 "claimed": true, 00:31:10.553 "claim_type": "exclusive_write", 00:31:10.553 "zoned": false, 00:31:10.553 "supported_io_types": { 00:31:10.553 "read": true, 00:31:10.553 "write": true, 00:31:10.553 "unmap": true, 00:31:10.553 "flush": true, 00:31:10.553 "reset": true, 00:31:10.553 "nvme_admin": false, 00:31:10.553 "nvme_io": false, 00:31:10.553 "nvme_io_md": false, 00:31:10.553 "write_zeroes": true, 00:31:10.553 "zcopy": true, 00:31:10.553 "get_zone_info": false, 00:31:10.553 "zone_management": false, 00:31:10.553 "zone_append": false, 00:31:10.553 "compare": false, 00:31:10.553 "compare_and_write": false, 00:31:10.553 "abort": true, 00:31:10.553 "seek_hole": false, 00:31:10.553 "seek_data": false, 00:31:10.553 "copy": true, 00:31:10.553 "nvme_iov_md": false 00:31:10.553 }, 00:31:10.553 "memory_domains": [ 00:31:10.553 { 00:31:10.553 
"dma_device_id": "system", 00:31:10.553 "dma_device_type": 1 00:31:10.553 }, 00:31:10.553 { 00:31:10.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.553 "dma_device_type": 2 00:31:10.553 } 00:31:10.553 ], 00:31:10.553 "driver_specific": {} 00:31:10.553 } 00:31:10.553 ] 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:10.553 "name": "Existed_Raid", 00:31:10.553 "uuid": "0b5b8c78-2868-4d88-b95e-ecab8ee81b76", 00:31:10.553 "strip_size_kb": 0, 00:31:10.553 "state": "online", 00:31:10.553 "raid_level": "raid1", 00:31:10.553 "superblock": false, 00:31:10.553 "num_base_bdevs": 3, 00:31:10.553 "num_base_bdevs_discovered": 3, 00:31:10.553 "num_base_bdevs_operational": 3, 00:31:10.553 "base_bdevs_list": [ 00:31:10.553 { 00:31:10.553 "name": "NewBaseBdev", 00:31:10.553 "uuid": "bf7a25f3-66c8-4506-9002-65f0b0a960e6", 00:31:10.553 "is_configured": true, 00:31:10.553 "data_offset": 0, 00:31:10.553 "data_size": 65536 00:31:10.553 }, 00:31:10.553 { 00:31:10.553 "name": "BaseBdev2", 00:31:10.553 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:10.553 "is_configured": true, 00:31:10.553 "data_offset": 0, 00:31:10.553 "data_size": 65536 00:31:10.553 }, 00:31:10.553 { 00:31:10.553 "name": "BaseBdev3", 00:31:10.553 "uuid": "8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:10.553 "is_configured": true, 00:31:10.553 "data_offset": 0, 00:31:10.553 "data_size": 65536 00:31:10.553 } 00:31:10.553 ] 00:31:10.553 }' 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:10.553 17:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:11.121 17:27:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:11.121 [2024-11-26 17:27:48.311633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:11.121 "name": "Existed_Raid", 00:31:11.121 "aliases": [ 00:31:11.121 "0b5b8c78-2868-4d88-b95e-ecab8ee81b76" 00:31:11.121 ], 00:31:11.121 "product_name": "Raid Volume", 00:31:11.121 "block_size": 512, 00:31:11.121 "num_blocks": 65536, 00:31:11.121 "uuid": "0b5b8c78-2868-4d88-b95e-ecab8ee81b76", 00:31:11.121 "assigned_rate_limits": { 00:31:11.121 "rw_ios_per_sec": 0, 00:31:11.121 "rw_mbytes_per_sec": 0, 00:31:11.121 "r_mbytes_per_sec": 0, 00:31:11.121 "w_mbytes_per_sec": 0 00:31:11.121 }, 00:31:11.121 "claimed": false, 00:31:11.121 "zoned": false, 00:31:11.121 "supported_io_types": { 00:31:11.121 "read": true, 00:31:11.121 "write": true, 00:31:11.121 "unmap": false, 00:31:11.121 "flush": false, 00:31:11.121 "reset": true, 00:31:11.121 "nvme_admin": false, 00:31:11.121 "nvme_io": false, 00:31:11.121 "nvme_io_md": false, 00:31:11.121 "write_zeroes": true, 00:31:11.121 "zcopy": false, 00:31:11.121 
"get_zone_info": false, 00:31:11.121 "zone_management": false, 00:31:11.121 "zone_append": false, 00:31:11.121 "compare": false, 00:31:11.121 "compare_and_write": false, 00:31:11.121 "abort": false, 00:31:11.121 "seek_hole": false, 00:31:11.121 "seek_data": false, 00:31:11.121 "copy": false, 00:31:11.121 "nvme_iov_md": false 00:31:11.121 }, 00:31:11.121 "memory_domains": [ 00:31:11.121 { 00:31:11.121 "dma_device_id": "system", 00:31:11.121 "dma_device_type": 1 00:31:11.121 }, 00:31:11.121 { 00:31:11.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.121 "dma_device_type": 2 00:31:11.121 }, 00:31:11.121 { 00:31:11.121 "dma_device_id": "system", 00:31:11.121 "dma_device_type": 1 00:31:11.121 }, 00:31:11.121 { 00:31:11.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.121 "dma_device_type": 2 00:31:11.121 }, 00:31:11.121 { 00:31:11.121 "dma_device_id": "system", 00:31:11.121 "dma_device_type": 1 00:31:11.121 }, 00:31:11.121 { 00:31:11.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.121 "dma_device_type": 2 00:31:11.121 } 00:31:11.121 ], 00:31:11.121 "driver_specific": { 00:31:11.121 "raid": { 00:31:11.121 "uuid": "0b5b8c78-2868-4d88-b95e-ecab8ee81b76", 00:31:11.121 "strip_size_kb": 0, 00:31:11.121 "state": "online", 00:31:11.121 "raid_level": "raid1", 00:31:11.121 "superblock": false, 00:31:11.121 "num_base_bdevs": 3, 00:31:11.121 "num_base_bdevs_discovered": 3, 00:31:11.121 "num_base_bdevs_operational": 3, 00:31:11.121 "base_bdevs_list": [ 00:31:11.121 { 00:31:11.121 "name": "NewBaseBdev", 00:31:11.121 "uuid": "bf7a25f3-66c8-4506-9002-65f0b0a960e6", 00:31:11.121 "is_configured": true, 00:31:11.121 "data_offset": 0, 00:31:11.121 "data_size": 65536 00:31:11.121 }, 00:31:11.121 { 00:31:11.121 "name": "BaseBdev2", 00:31:11.121 "uuid": "0928a920-5302-435b-acb2-bd3716db5c41", 00:31:11.121 "is_configured": true, 00:31:11.121 "data_offset": 0, 00:31:11.121 "data_size": 65536 00:31:11.121 }, 00:31:11.121 { 00:31:11.121 "name": "BaseBdev3", 00:31:11.121 "uuid": 
"8b1f68ef-141d-4f2d-aee0-ebddb64f9ef5", 00:31:11.121 "is_configured": true, 00:31:11.121 "data_offset": 0, 00:31:11.121 "data_size": 65536 00:31:11.121 } 00:31:11.121 ] 00:31:11.121 } 00:31:11.121 } 00:31:11.121 }' 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:11.121 BaseBdev2 00:31:11.121 BaseBdev3' 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.121 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.122 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:11.122 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:11.122 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:11.122 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:11.122 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.122 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.122 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.122 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.381 
[2024-11-26 17:27:48.571400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:11.381 [2024-11-26 17:27:48.571435] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:11.381 [2024-11-26 17:27:48.571517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:11.381 [2024-11-26 17:27:48.571800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:11.381 [2024-11-26 17:27:48.571812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67809 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67809 ']' 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67809 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67809 00:31:11.381 killing process with pid 67809 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67809' 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67809 00:31:11.381 [2024-11-26 
17:27:48.611660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:11.381 17:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67809 00:31:11.639 [2024-11-26 17:27:48.925121] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:13.033 17:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:31:13.033 00:31:13.033 real 0m10.976s 00:31:13.033 user 0m17.499s 00:31:13.033 sys 0m2.020s 00:31:13.033 17:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.033 ************************************ 00:31:13.033 END TEST raid_state_function_test 00:31:13.033 ************************************ 00:31:13.033 17:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 17:27:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:31:13.033 17:27:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:13.033 17:27:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:13.033 17:27:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:13.033 ************************************ 00:31:13.033 START TEST raid_state_function_test_sb 00:31:13.034 ************************************ 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:13.034 17:27:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:31:13.034 
17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:13.034 Process raid pid: 68438 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68438 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68438' 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68438 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68438 ']' 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.034 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:13.034 [2024-11-26 17:27:50.287208] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:31:13.034 [2024-11-26 17:27:50.287602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.293 [2024-11-26 17:27:50.485716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.293 [2024-11-26 17:27:50.605836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.551 [2024-11-26 17:27:50.823679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:13.551 [2024-11-26 17:27:50.823726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.117 [2024-11-26 17:27:51.302246] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:14.117 [2024-11-26 17:27:51.302440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:14.117 [2024-11-26 17:27:51.302469] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:14.117 [2024-11-26 17:27:51.302485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:14.117 [2024-11-26 17:27:51.302493] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:31:14.117 [2024-11-26 17:27:51.302506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.117 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:14.117 "name": "Existed_Raid", 00:31:14.117 "uuid": "acd428b3-7090-4554-a002-b6364a80472f", 00:31:14.117 "strip_size_kb": 0, 00:31:14.117 "state": "configuring", 00:31:14.117 "raid_level": "raid1", 00:31:14.117 "superblock": true, 00:31:14.117 "num_base_bdevs": 3, 00:31:14.117 "num_base_bdevs_discovered": 0, 00:31:14.117 "num_base_bdevs_operational": 3, 00:31:14.117 "base_bdevs_list": [ 00:31:14.117 { 00:31:14.117 "name": "BaseBdev1", 00:31:14.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.118 "is_configured": false, 00:31:14.118 "data_offset": 0, 00:31:14.118 "data_size": 0 00:31:14.118 }, 00:31:14.118 { 00:31:14.118 "name": "BaseBdev2", 00:31:14.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.118 "is_configured": false, 00:31:14.118 "data_offset": 0, 00:31:14.118 "data_size": 0 00:31:14.118 }, 00:31:14.118 { 00:31:14.118 "name": "BaseBdev3", 00:31:14.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.118 "is_configured": false, 00:31:14.118 "data_offset": 0, 00:31:14.118 "data_size": 0 00:31:14.118 } 00:31:14.118 ] 00:31:14.118 }' 00:31:14.118 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:14.118 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.394 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:14.394 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.394 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.394 [2024-11-26 17:27:51.746304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:14.394 [2024-11-26 17:27:51.746459] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:14.394 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.394 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:14.394 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.394 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.395 [2024-11-26 17:27:51.758291] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:14.395 [2024-11-26 17:27:51.758338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:14.395 [2024-11-26 17:27:51.758348] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:14.395 [2024-11-26 17:27:51.758361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:14.395 [2024-11-26 17:27:51.758369] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:14.395 [2024-11-26 17:27:51.758381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.395 [2024-11-26 17:27:51.804513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:14.395 BaseBdev1 
00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.395 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.395 [ 00:31:14.395 { 00:31:14.395 "name": "BaseBdev1", 00:31:14.395 "aliases": [ 00:31:14.395 "ca08e16e-3845-4304-bc95-2f83b1248b93" 00:31:14.395 ], 00:31:14.395 "product_name": "Malloc disk", 00:31:14.395 "block_size": 512, 00:31:14.395 "num_blocks": 65536, 00:31:14.395 "uuid": "ca08e16e-3845-4304-bc95-2f83b1248b93", 00:31:14.395 "assigned_rate_limits": { 00:31:14.395 
"rw_ios_per_sec": 0, 00:31:14.395 "rw_mbytes_per_sec": 0, 00:31:14.395 "r_mbytes_per_sec": 0, 00:31:14.395 "w_mbytes_per_sec": 0 00:31:14.395 }, 00:31:14.395 "claimed": true, 00:31:14.395 "claim_type": "exclusive_write", 00:31:14.395 "zoned": false, 00:31:14.395 "supported_io_types": { 00:31:14.395 "read": true, 00:31:14.395 "write": true, 00:31:14.395 "unmap": true, 00:31:14.395 "flush": true, 00:31:14.395 "reset": true, 00:31:14.395 "nvme_admin": false, 00:31:14.395 "nvme_io": false, 00:31:14.395 "nvme_io_md": false, 00:31:14.395 "write_zeroes": true, 00:31:14.396 "zcopy": true, 00:31:14.664 "get_zone_info": false, 00:31:14.664 "zone_management": false, 00:31:14.664 "zone_append": false, 00:31:14.664 "compare": false, 00:31:14.664 "compare_and_write": false, 00:31:14.664 "abort": true, 00:31:14.664 "seek_hole": false, 00:31:14.664 "seek_data": false, 00:31:14.664 "copy": true, 00:31:14.664 "nvme_iov_md": false 00:31:14.664 }, 00:31:14.664 "memory_domains": [ 00:31:14.664 { 00:31:14.664 "dma_device_id": "system", 00:31:14.664 "dma_device_type": 1 00:31:14.664 }, 00:31:14.664 { 00:31:14.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:14.664 "dma_device_type": 2 00:31:14.664 } 00:31:14.664 ], 00:31:14.664 "driver_specific": {} 00:31:14.664 } 00:31:14.664 ] 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:14.664 "name": "Existed_Raid", 00:31:14.664 "uuid": "fd551602-d722-4384-97e1-f9594001b43a", 00:31:14.664 "strip_size_kb": 0, 00:31:14.664 "state": "configuring", 00:31:14.664 "raid_level": "raid1", 00:31:14.664 "superblock": true, 00:31:14.664 "num_base_bdevs": 3, 00:31:14.664 "num_base_bdevs_discovered": 1, 00:31:14.664 "num_base_bdevs_operational": 3, 00:31:14.664 "base_bdevs_list": [ 00:31:14.664 { 00:31:14.664 "name": "BaseBdev1", 00:31:14.664 "uuid": "ca08e16e-3845-4304-bc95-2f83b1248b93", 00:31:14.664 "is_configured": true, 00:31:14.664 "data_offset": 2048, 00:31:14.664 "data_size": 63488 
00:31:14.664 }, 00:31:14.664 { 00:31:14.664 "name": "BaseBdev2", 00:31:14.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.664 "is_configured": false, 00:31:14.664 "data_offset": 0, 00:31:14.664 "data_size": 0 00:31:14.664 }, 00:31:14.664 { 00:31:14.664 "name": "BaseBdev3", 00:31:14.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.664 "is_configured": false, 00:31:14.664 "data_offset": 0, 00:31:14.664 "data_size": 0 00:31:14.664 } 00:31:14.664 ] 00:31:14.664 }' 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:14.664 17:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.923 [2024-11-26 17:27:52.248701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:14.923 [2024-11-26 17:27:52.248889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.923 [2024-11-26 17:27:52.260737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:14.923 [2024-11-26 17:27:52.263135] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:14.923 [2024-11-26 17:27:52.263297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:14.923 [2024-11-26 17:27:52.263425] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:14.923 [2024-11-26 17:27:52.263477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:14.923 "name": "Existed_Raid", 00:31:14.923 "uuid": "db89b857-7aeb-4487-a39f-6dd090ab656d", 00:31:14.923 "strip_size_kb": 0, 00:31:14.923 "state": "configuring", 00:31:14.923 "raid_level": "raid1", 00:31:14.923 "superblock": true, 00:31:14.923 "num_base_bdevs": 3, 00:31:14.923 "num_base_bdevs_discovered": 1, 00:31:14.923 "num_base_bdevs_operational": 3, 00:31:14.923 "base_bdevs_list": [ 00:31:14.923 { 00:31:14.923 "name": "BaseBdev1", 00:31:14.923 "uuid": "ca08e16e-3845-4304-bc95-2f83b1248b93", 00:31:14.923 "is_configured": true, 00:31:14.923 "data_offset": 2048, 00:31:14.923 "data_size": 63488 00:31:14.923 }, 00:31:14.923 { 00:31:14.923 "name": "BaseBdev2", 00:31:14.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.923 "is_configured": false, 00:31:14.923 "data_offset": 0, 00:31:14.923 "data_size": 0 00:31:14.923 }, 00:31:14.923 { 00:31:14.923 "name": "BaseBdev3", 00:31:14.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.923 "is_configured": false, 00:31:14.923 "data_offset": 0, 00:31:14.923 "data_size": 0 00:31:14.923 } 00:31:14.923 ] 00:31:14.923 }' 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:14.923 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:15.491 [2024-11-26 17:27:52.737034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:15.491 BaseBdev2 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:15.491 [ 00:31:15.491 { 00:31:15.491 "name": "BaseBdev2", 00:31:15.491 "aliases": [ 00:31:15.491 "9228f0a2-45cd-4c6e-a4a9-41d0d6d9ba3f" 00:31:15.491 ], 00:31:15.491 "product_name": "Malloc disk", 00:31:15.491 "block_size": 512, 00:31:15.491 "num_blocks": 65536, 00:31:15.491 "uuid": "9228f0a2-45cd-4c6e-a4a9-41d0d6d9ba3f", 00:31:15.491 "assigned_rate_limits": { 00:31:15.491 "rw_ios_per_sec": 0, 00:31:15.491 "rw_mbytes_per_sec": 0, 00:31:15.491 "r_mbytes_per_sec": 0, 00:31:15.491 "w_mbytes_per_sec": 0 00:31:15.491 }, 00:31:15.491 "claimed": true, 00:31:15.491 "claim_type": "exclusive_write", 00:31:15.491 "zoned": false, 00:31:15.491 "supported_io_types": { 00:31:15.491 "read": true, 00:31:15.491 "write": true, 00:31:15.491 "unmap": true, 00:31:15.491 "flush": true, 00:31:15.491 "reset": true, 00:31:15.491 "nvme_admin": false, 00:31:15.491 "nvme_io": false, 00:31:15.491 "nvme_io_md": false, 00:31:15.491 "write_zeroes": true, 00:31:15.491 "zcopy": true, 00:31:15.491 "get_zone_info": false, 00:31:15.491 "zone_management": false, 00:31:15.491 "zone_append": false, 00:31:15.491 "compare": false, 00:31:15.491 "compare_and_write": false, 00:31:15.491 "abort": true, 00:31:15.491 "seek_hole": false, 00:31:15.491 "seek_data": false, 00:31:15.491 "copy": true, 00:31:15.491 "nvme_iov_md": false 00:31:15.491 }, 00:31:15.491 "memory_domains": [ 00:31:15.491 { 00:31:15.491 "dma_device_id": "system", 00:31:15.491 "dma_device_type": 1 00:31:15.491 }, 00:31:15.491 { 00:31:15.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.491 "dma_device_type": 2 00:31:15.491 } 00:31:15.491 ], 00:31:15.491 "driver_specific": {} 00:31:15.491 } 00:31:15.491 ] 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.491 
17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:15.491 "name": "Existed_Raid", 00:31:15.491 "uuid": "db89b857-7aeb-4487-a39f-6dd090ab656d", 00:31:15.491 "strip_size_kb": 0, 00:31:15.491 "state": "configuring", 00:31:15.491 "raid_level": "raid1", 00:31:15.491 "superblock": true, 00:31:15.491 "num_base_bdevs": 3, 00:31:15.491 "num_base_bdevs_discovered": 2, 00:31:15.491 "num_base_bdevs_operational": 3, 00:31:15.491 "base_bdevs_list": [ 00:31:15.491 { 00:31:15.491 "name": "BaseBdev1", 00:31:15.491 "uuid": "ca08e16e-3845-4304-bc95-2f83b1248b93", 00:31:15.491 "is_configured": true, 00:31:15.491 "data_offset": 2048, 00:31:15.491 "data_size": 63488 00:31:15.491 }, 00:31:15.491 { 00:31:15.491 "name": "BaseBdev2", 00:31:15.491 "uuid": "9228f0a2-45cd-4c6e-a4a9-41d0d6d9ba3f", 00:31:15.491 "is_configured": true, 00:31:15.491 "data_offset": 2048, 00:31:15.491 "data_size": 63488 00:31:15.491 }, 00:31:15.491 { 00:31:15.491 "name": "BaseBdev3", 00:31:15.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.491 "is_configured": false, 00:31:15.491 "data_offset": 0, 00:31:15.491 "data_size": 0 00:31:15.491 } 00:31:15.491 ] 00:31:15.491 }' 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:15.491 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.058 [2024-11-26 17:27:53.266840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:16.058 [2024-11-26 17:27:53.267121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:31:16.058 [2024-11-26 17:27:53.267148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:16.058 BaseBdev3 00:31:16.058 [2024-11-26 17:27:53.267463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:16.058 [2024-11-26 17:27:53.267631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:16.058 [2024-11-26 17:27:53.267642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:16.058 [2024-11-26 17:27:53.267801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.058 17:27:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.058 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.058 [ 00:31:16.058 { 00:31:16.058 "name": "BaseBdev3", 00:31:16.058 "aliases": [ 00:31:16.058 "4a94968a-a697-47cc-a462-75fca2ec5ea3" 00:31:16.058 ], 00:31:16.058 "product_name": "Malloc disk", 00:31:16.058 "block_size": 512, 00:31:16.058 "num_blocks": 65536, 00:31:16.058 "uuid": "4a94968a-a697-47cc-a462-75fca2ec5ea3", 00:31:16.058 "assigned_rate_limits": { 00:31:16.058 "rw_ios_per_sec": 0, 00:31:16.058 "rw_mbytes_per_sec": 0, 00:31:16.058 "r_mbytes_per_sec": 0, 00:31:16.058 "w_mbytes_per_sec": 0 00:31:16.058 }, 00:31:16.058 "claimed": true, 00:31:16.058 "claim_type": "exclusive_write", 00:31:16.058 "zoned": false, 00:31:16.058 "supported_io_types": { 00:31:16.058 "read": true, 00:31:16.058 "write": true, 00:31:16.058 "unmap": true, 00:31:16.058 "flush": true, 00:31:16.058 "reset": true, 00:31:16.059 "nvme_admin": false, 00:31:16.059 "nvme_io": false, 00:31:16.059 "nvme_io_md": false, 00:31:16.059 "write_zeroes": true, 00:31:16.059 "zcopy": true, 00:31:16.059 "get_zone_info": false, 00:31:16.059 "zone_management": false, 00:31:16.059 "zone_append": false, 00:31:16.059 "compare": false, 00:31:16.059 "compare_and_write": false, 00:31:16.059 "abort": true, 00:31:16.059 "seek_hole": false, 00:31:16.059 "seek_data": false, 00:31:16.059 "copy": true, 00:31:16.059 "nvme_iov_md": false 00:31:16.059 }, 00:31:16.059 "memory_domains": [ 00:31:16.059 { 00:31:16.059 "dma_device_id": "system", 00:31:16.059 "dma_device_type": 1 00:31:16.059 }, 00:31:16.059 { 00:31:16.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:16.059 "dma_device_type": 2 00:31:16.059 } 00:31:16.059 ], 00:31:16.059 "driver_specific": {} 00:31:16.059 } 00:31:16.059 ] 
00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.059 
17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:16.059 "name": "Existed_Raid", 00:31:16.059 "uuid": "db89b857-7aeb-4487-a39f-6dd090ab656d", 00:31:16.059 "strip_size_kb": 0, 00:31:16.059 "state": "online", 00:31:16.059 "raid_level": "raid1", 00:31:16.059 "superblock": true, 00:31:16.059 "num_base_bdevs": 3, 00:31:16.059 "num_base_bdevs_discovered": 3, 00:31:16.059 "num_base_bdevs_operational": 3, 00:31:16.059 "base_bdevs_list": [ 00:31:16.059 { 00:31:16.059 "name": "BaseBdev1", 00:31:16.059 "uuid": "ca08e16e-3845-4304-bc95-2f83b1248b93", 00:31:16.059 "is_configured": true, 00:31:16.059 "data_offset": 2048, 00:31:16.059 "data_size": 63488 00:31:16.059 }, 00:31:16.059 { 00:31:16.059 "name": "BaseBdev2", 00:31:16.059 "uuid": "9228f0a2-45cd-4c6e-a4a9-41d0d6d9ba3f", 00:31:16.059 "is_configured": true, 00:31:16.059 "data_offset": 2048, 00:31:16.059 "data_size": 63488 00:31:16.059 }, 00:31:16.059 { 00:31:16.059 "name": "BaseBdev3", 00:31:16.059 "uuid": "4a94968a-a697-47cc-a462-75fca2ec5ea3", 00:31:16.059 "is_configured": true, 00:31:16.059 "data_offset": 2048, 00:31:16.059 "data_size": 63488 00:31:16.059 } 00:31:16.059 ] 00:31:16.059 }' 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:16.059 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.317 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.575 [2024-11-26 17:27:53.763341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:16.575 "name": "Existed_Raid", 00:31:16.575 "aliases": [ 00:31:16.575 "db89b857-7aeb-4487-a39f-6dd090ab656d" 00:31:16.575 ], 00:31:16.575 "product_name": "Raid Volume", 00:31:16.575 "block_size": 512, 00:31:16.575 "num_blocks": 63488, 00:31:16.575 "uuid": "db89b857-7aeb-4487-a39f-6dd090ab656d", 00:31:16.575 "assigned_rate_limits": { 00:31:16.575 "rw_ios_per_sec": 0, 00:31:16.575 "rw_mbytes_per_sec": 0, 00:31:16.575 "r_mbytes_per_sec": 0, 00:31:16.575 "w_mbytes_per_sec": 0 00:31:16.575 }, 00:31:16.575 "claimed": false, 00:31:16.575 "zoned": false, 00:31:16.575 "supported_io_types": { 00:31:16.575 "read": true, 00:31:16.575 "write": true, 00:31:16.575 "unmap": false, 00:31:16.575 "flush": false, 00:31:16.575 "reset": true, 00:31:16.575 "nvme_admin": false, 00:31:16.575 "nvme_io": false, 00:31:16.575 "nvme_io_md": false, 00:31:16.575 "write_zeroes": true, 
00:31:16.575 "zcopy": false, 00:31:16.575 "get_zone_info": false, 00:31:16.575 "zone_management": false, 00:31:16.575 "zone_append": false, 00:31:16.575 "compare": false, 00:31:16.575 "compare_and_write": false, 00:31:16.575 "abort": false, 00:31:16.575 "seek_hole": false, 00:31:16.575 "seek_data": false, 00:31:16.575 "copy": false, 00:31:16.575 "nvme_iov_md": false 00:31:16.575 }, 00:31:16.575 "memory_domains": [ 00:31:16.575 { 00:31:16.575 "dma_device_id": "system", 00:31:16.575 "dma_device_type": 1 00:31:16.575 }, 00:31:16.575 { 00:31:16.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:16.575 "dma_device_type": 2 00:31:16.575 }, 00:31:16.575 { 00:31:16.575 "dma_device_id": "system", 00:31:16.575 "dma_device_type": 1 00:31:16.575 }, 00:31:16.575 { 00:31:16.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:16.575 "dma_device_type": 2 00:31:16.575 }, 00:31:16.575 { 00:31:16.575 "dma_device_id": "system", 00:31:16.575 "dma_device_type": 1 00:31:16.575 }, 00:31:16.575 { 00:31:16.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:16.575 "dma_device_type": 2 00:31:16.575 } 00:31:16.575 ], 00:31:16.575 "driver_specific": { 00:31:16.575 "raid": { 00:31:16.575 "uuid": "db89b857-7aeb-4487-a39f-6dd090ab656d", 00:31:16.575 "strip_size_kb": 0, 00:31:16.575 "state": "online", 00:31:16.575 "raid_level": "raid1", 00:31:16.575 "superblock": true, 00:31:16.575 "num_base_bdevs": 3, 00:31:16.575 "num_base_bdevs_discovered": 3, 00:31:16.575 "num_base_bdevs_operational": 3, 00:31:16.575 "base_bdevs_list": [ 00:31:16.575 { 00:31:16.575 "name": "BaseBdev1", 00:31:16.575 "uuid": "ca08e16e-3845-4304-bc95-2f83b1248b93", 00:31:16.575 "is_configured": true, 00:31:16.575 "data_offset": 2048, 00:31:16.575 "data_size": 63488 00:31:16.575 }, 00:31:16.575 { 00:31:16.575 "name": "BaseBdev2", 00:31:16.575 "uuid": "9228f0a2-45cd-4c6e-a4a9-41d0d6d9ba3f", 00:31:16.575 "is_configured": true, 00:31:16.575 "data_offset": 2048, 00:31:16.575 "data_size": 63488 00:31:16.575 }, 00:31:16.575 { 
00:31:16.575 "name": "BaseBdev3", 00:31:16.575 "uuid": "4a94968a-a697-47cc-a462-75fca2ec5ea3", 00:31:16.575 "is_configured": true, 00:31:16.575 "data_offset": 2048, 00:31:16.575 "data_size": 63488 00:31:16.575 } 00:31:16.575 ] 00:31:16.575 } 00:31:16.575 } 00:31:16.575 }' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:16.575 BaseBdev2 00:31:16.575 BaseBdev3' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:16.575 17:27:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.575 17:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.575 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.834 [2024-11-26 17:27:54.035167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:16.834 
17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:16.834 "name": "Existed_Raid", 00:31:16.834 "uuid": "db89b857-7aeb-4487-a39f-6dd090ab656d", 00:31:16.834 "strip_size_kb": 0, 00:31:16.834 "state": "online", 00:31:16.834 "raid_level": "raid1", 00:31:16.834 "superblock": true, 00:31:16.834 "num_base_bdevs": 3, 00:31:16.834 "num_base_bdevs_discovered": 2, 00:31:16.834 "num_base_bdevs_operational": 2, 00:31:16.834 "base_bdevs_list": [ 00:31:16.834 { 00:31:16.834 "name": null, 00:31:16.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.834 "is_configured": false, 00:31:16.834 "data_offset": 0, 00:31:16.834 "data_size": 63488 00:31:16.834 }, 00:31:16.834 { 00:31:16.834 "name": "BaseBdev2", 00:31:16.834 "uuid": "9228f0a2-45cd-4c6e-a4a9-41d0d6d9ba3f", 00:31:16.834 "is_configured": true, 00:31:16.834 "data_offset": 2048, 00:31:16.834 "data_size": 63488 00:31:16.834 }, 00:31:16.834 { 00:31:16.834 "name": "BaseBdev3", 00:31:16.834 "uuid": "4a94968a-a697-47cc-a462-75fca2ec5ea3", 00:31:16.834 "is_configured": true, 00:31:16.834 "data_offset": 2048, 00:31:16.834 "data_size": 63488 00:31:16.834 } 00:31:16.834 ] 00:31:16.834 }' 00:31:16.834 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:16.834 
17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.402 [2024-11-26 17:27:54.642294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.402 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.402 [2024-11-26 17:27:54.807397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:17.402 [2024-11-26 17:27:54.807499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:17.661 [2024-11-26 17:27:54.909912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:17.661 [2024-11-26 17:27:54.909977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:17.661 [2024-11-26 17:27:54.909992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.661 17:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.661 BaseBdev2 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:17.661 17:27:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.661 [ 00:31:17.661 { 00:31:17.661 "name": "BaseBdev2", 00:31:17.661 "aliases": [ 00:31:17.661 "902a259e-a2be-4864-bd77-5a871a1dc5ec" 00:31:17.661 ], 00:31:17.661 "product_name": "Malloc disk", 00:31:17.661 "block_size": 512, 00:31:17.661 "num_blocks": 65536, 00:31:17.661 "uuid": "902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:17.661 "assigned_rate_limits": { 00:31:17.661 "rw_ios_per_sec": 0, 00:31:17.661 "rw_mbytes_per_sec": 0, 00:31:17.661 "r_mbytes_per_sec": 0, 00:31:17.661 "w_mbytes_per_sec": 0 00:31:17.661 }, 00:31:17.661 "claimed": false, 00:31:17.661 "zoned": false, 00:31:17.661 "supported_io_types": { 00:31:17.661 "read": true, 00:31:17.661 "write": true, 00:31:17.661 "unmap": true, 00:31:17.661 "flush": true, 00:31:17.661 "reset": true, 00:31:17.661 "nvme_admin": false, 00:31:17.661 "nvme_io": false, 00:31:17.661 "nvme_io_md": false, 00:31:17.661 
"write_zeroes": true, 00:31:17.661 "zcopy": true, 00:31:17.661 "get_zone_info": false, 00:31:17.661 "zone_management": false, 00:31:17.661 "zone_append": false, 00:31:17.661 "compare": false, 00:31:17.661 "compare_and_write": false, 00:31:17.661 "abort": true, 00:31:17.661 "seek_hole": false, 00:31:17.661 "seek_data": false, 00:31:17.661 "copy": true, 00:31:17.661 "nvme_iov_md": false 00:31:17.661 }, 00:31:17.661 "memory_domains": [ 00:31:17.661 { 00:31:17.661 "dma_device_id": "system", 00:31:17.661 "dma_device_type": 1 00:31:17.661 }, 00:31:17.661 { 00:31:17.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:17.661 "dma_device_type": 2 00:31:17.661 } 00:31:17.661 ], 00:31:17.661 "driver_specific": {} 00:31:17.661 } 00:31:17.661 ] 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.661 BaseBdev3 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.661 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.661 [ 00:31:17.661 { 00:31:17.661 "name": "BaseBdev3", 00:31:17.661 "aliases": [ 00:31:17.661 "5f58aea6-1e45-40db-a162-3622e38dcdf0" 00:31:17.661 ], 00:31:17.661 "product_name": "Malloc disk", 00:31:17.661 "block_size": 512, 00:31:17.661 "num_blocks": 65536, 00:31:17.661 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:17.661 "assigned_rate_limits": { 00:31:17.661 "rw_ios_per_sec": 0, 00:31:17.661 "rw_mbytes_per_sec": 0, 00:31:17.661 "r_mbytes_per_sec": 0, 00:31:17.920 "w_mbytes_per_sec": 0 00:31:17.920 }, 00:31:17.920 "claimed": false, 00:31:17.920 "zoned": false, 00:31:17.920 "supported_io_types": { 00:31:17.920 "read": true, 00:31:17.920 "write": true, 00:31:17.920 "unmap": true, 00:31:17.920 "flush": true, 00:31:17.920 "reset": true, 00:31:17.920 "nvme_admin": false, 00:31:17.920 "nvme_io": false, 
00:31:17.920 "nvme_io_md": false, 00:31:17.920 "write_zeroes": true, 00:31:17.920 "zcopy": true, 00:31:17.920 "get_zone_info": false, 00:31:17.920 "zone_management": false, 00:31:17.920 "zone_append": false, 00:31:17.920 "compare": false, 00:31:17.920 "compare_and_write": false, 00:31:17.920 "abort": true, 00:31:17.920 "seek_hole": false, 00:31:17.920 "seek_data": false, 00:31:17.920 "copy": true, 00:31:17.920 "nvme_iov_md": false 00:31:17.920 }, 00:31:17.920 "memory_domains": [ 00:31:17.920 { 00:31:17.920 "dma_device_id": "system", 00:31:17.920 "dma_device_type": 1 00:31:17.920 }, 00:31:17.920 { 00:31:17.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:17.920 "dma_device_type": 2 00:31:17.920 } 00:31:17.920 ], 00:31:17.920 "driver_specific": {} 00:31:17.920 } 00:31:17.920 ] 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.920 [2024-11-26 17:27:55.120628] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:17.920 [2024-11-26 17:27:55.120680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:17.920 [2024-11-26 17:27:55.120718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:31:17.920 [2024-11-26 17:27:55.123053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:17.920 17:27:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.921 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:17.921 "name": "Existed_Raid", 00:31:17.921 "uuid": "5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:17.921 "strip_size_kb": 0, 00:31:17.921 "state": "configuring", 00:31:17.921 "raid_level": "raid1", 00:31:17.921 "superblock": true, 00:31:17.921 "num_base_bdevs": 3, 00:31:17.921 "num_base_bdevs_discovered": 2, 00:31:17.921 "num_base_bdevs_operational": 3, 00:31:17.921 "base_bdevs_list": [ 00:31:17.921 { 00:31:17.921 "name": "BaseBdev1", 00:31:17.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:17.921 "is_configured": false, 00:31:17.921 "data_offset": 0, 00:31:17.921 "data_size": 0 00:31:17.921 }, 00:31:17.921 { 00:31:17.921 "name": "BaseBdev2", 00:31:17.921 "uuid": "902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:17.921 "is_configured": true, 00:31:17.921 "data_offset": 2048, 00:31:17.921 "data_size": 63488 00:31:17.921 }, 00:31:17.921 { 00:31:17.921 "name": "BaseBdev3", 00:31:17.921 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:17.921 "is_configured": true, 00:31:17.921 "data_offset": 2048, 00:31:17.921 "data_size": 63488 00:31:17.921 } 00:31:17.921 ] 00:31:17.921 }' 00:31:17.921 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:17.921 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:18.179 [2024-11-26 17:27:55.572733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:18.179 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.438 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:18.438 "name": "Existed_Raid", 00:31:18.438 "uuid": 
"5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:18.438 "strip_size_kb": 0, 00:31:18.438 "state": "configuring", 00:31:18.438 "raid_level": "raid1", 00:31:18.438 "superblock": true, 00:31:18.438 "num_base_bdevs": 3, 00:31:18.438 "num_base_bdevs_discovered": 1, 00:31:18.438 "num_base_bdevs_operational": 3, 00:31:18.438 "base_bdevs_list": [ 00:31:18.438 { 00:31:18.438 "name": "BaseBdev1", 00:31:18.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.438 "is_configured": false, 00:31:18.438 "data_offset": 0, 00:31:18.438 "data_size": 0 00:31:18.438 }, 00:31:18.438 { 00:31:18.438 "name": null, 00:31:18.438 "uuid": "902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:18.438 "is_configured": false, 00:31:18.438 "data_offset": 0, 00:31:18.438 "data_size": 63488 00:31:18.438 }, 00:31:18.438 { 00:31:18.438 "name": "BaseBdev3", 00:31:18.438 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:18.438 "is_configured": true, 00:31:18.438 "data_offset": 2048, 00:31:18.438 "data_size": 63488 00:31:18.438 } 00:31:18.438 ] 00:31:18.438 }' 00:31:18.438 17:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:18.438 17:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:18.697 17:27:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:18.697 [2024-11-26 17:27:56.138581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:18.697 BaseBdev1 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.697 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.025 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.025 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:19.025 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:19.025 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.025 [ 00:31:19.025 { 00:31:19.025 "name": "BaseBdev1", 00:31:19.025 "aliases": [ 00:31:19.025 "4586f982-8a3f-44dd-8143-c117d18142fd" 00:31:19.025 ], 00:31:19.025 "product_name": "Malloc disk", 00:31:19.025 "block_size": 512, 00:31:19.025 "num_blocks": 65536, 00:31:19.025 "uuid": "4586f982-8a3f-44dd-8143-c117d18142fd", 00:31:19.025 "assigned_rate_limits": { 00:31:19.025 "rw_ios_per_sec": 0, 00:31:19.025 "rw_mbytes_per_sec": 0, 00:31:19.025 "r_mbytes_per_sec": 0, 00:31:19.025 "w_mbytes_per_sec": 0 00:31:19.025 }, 00:31:19.025 "claimed": true, 00:31:19.025 "claim_type": "exclusive_write", 00:31:19.025 "zoned": false, 00:31:19.025 "supported_io_types": { 00:31:19.025 "read": true, 00:31:19.025 "write": true, 00:31:19.025 "unmap": true, 00:31:19.025 "flush": true, 00:31:19.025 "reset": true, 00:31:19.025 "nvme_admin": false, 00:31:19.025 "nvme_io": false, 00:31:19.025 "nvme_io_md": false, 00:31:19.025 "write_zeroes": true, 00:31:19.025 "zcopy": true, 00:31:19.025 "get_zone_info": false, 00:31:19.025 "zone_management": false, 00:31:19.025 "zone_append": false, 00:31:19.025 "compare": false, 00:31:19.025 "compare_and_write": false, 00:31:19.025 "abort": true, 00:31:19.025 "seek_hole": false, 00:31:19.025 "seek_data": false, 00:31:19.025 "copy": true, 00:31:19.025 "nvme_iov_md": false 00:31:19.025 }, 00:31:19.025 "memory_domains": [ 00:31:19.025 { 00:31:19.025 "dma_device_id": "system", 00:31:19.025 "dma_device_type": 1 00:31:19.025 }, 00:31:19.025 { 00:31:19.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:19.026 "dma_device_type": 2 00:31:19.026 } 00:31:19.026 ], 00:31:19.026 "driver_specific": {} 00:31:19.026 } 00:31:19.026 ] 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:19.026 
17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:19.026 "name": "Existed_Raid", 00:31:19.026 "uuid": "5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:19.026 "strip_size_kb": 0, 
00:31:19.026 "state": "configuring", 00:31:19.026 "raid_level": "raid1", 00:31:19.026 "superblock": true, 00:31:19.026 "num_base_bdevs": 3, 00:31:19.026 "num_base_bdevs_discovered": 2, 00:31:19.026 "num_base_bdevs_operational": 3, 00:31:19.026 "base_bdevs_list": [ 00:31:19.026 { 00:31:19.026 "name": "BaseBdev1", 00:31:19.026 "uuid": "4586f982-8a3f-44dd-8143-c117d18142fd", 00:31:19.026 "is_configured": true, 00:31:19.026 "data_offset": 2048, 00:31:19.026 "data_size": 63488 00:31:19.026 }, 00:31:19.026 { 00:31:19.026 "name": null, 00:31:19.026 "uuid": "902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:19.026 "is_configured": false, 00:31:19.026 "data_offset": 0, 00:31:19.026 "data_size": 63488 00:31:19.026 }, 00:31:19.026 { 00:31:19.026 "name": "BaseBdev3", 00:31:19.026 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:19.026 "is_configured": true, 00:31:19.026 "data_offset": 2048, 00:31:19.026 "data_size": 63488 00:31:19.026 } 00:31:19.026 ] 00:31:19.026 }' 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:19.026 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.302 [2024-11-26 17:27:56.670752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:19.302 "name": "Existed_Raid", 00:31:19.302 "uuid": "5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:19.302 "strip_size_kb": 0, 00:31:19.302 "state": "configuring", 00:31:19.302 "raid_level": "raid1", 00:31:19.302 "superblock": true, 00:31:19.302 "num_base_bdevs": 3, 00:31:19.302 "num_base_bdevs_discovered": 1, 00:31:19.302 "num_base_bdevs_operational": 3, 00:31:19.302 "base_bdevs_list": [ 00:31:19.302 { 00:31:19.302 "name": "BaseBdev1", 00:31:19.302 "uuid": "4586f982-8a3f-44dd-8143-c117d18142fd", 00:31:19.302 "is_configured": true, 00:31:19.302 "data_offset": 2048, 00:31:19.302 "data_size": 63488 00:31:19.302 }, 00:31:19.302 { 00:31:19.302 "name": null, 00:31:19.302 "uuid": "902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:19.302 "is_configured": false, 00:31:19.302 "data_offset": 0, 00:31:19.302 "data_size": 63488 00:31:19.302 }, 00:31:19.302 { 00:31:19.302 "name": null, 00:31:19.302 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:19.302 "is_configured": false, 00:31:19.302 "data_offset": 0, 00:31:19.302 "data_size": 63488 00:31:19.302 } 00:31:19.302 ] 00:31:19.302 }' 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:19.302 17:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.870 17:27:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.870 [2024-11-26 17:27:57.178918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:19.870 "name": "Existed_Raid", 00:31:19.870 "uuid": "5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:19.870 "strip_size_kb": 0, 00:31:19.870 "state": "configuring", 00:31:19.870 "raid_level": "raid1", 00:31:19.870 "superblock": true, 00:31:19.870 "num_base_bdevs": 3, 00:31:19.870 "num_base_bdevs_discovered": 2, 00:31:19.870 "num_base_bdevs_operational": 3, 00:31:19.870 "base_bdevs_list": [ 00:31:19.870 { 00:31:19.870 "name": "BaseBdev1", 00:31:19.870 "uuid": "4586f982-8a3f-44dd-8143-c117d18142fd", 00:31:19.870 "is_configured": true, 00:31:19.870 "data_offset": 2048, 00:31:19.870 "data_size": 63488 00:31:19.870 }, 00:31:19.870 { 00:31:19.870 "name": null, 00:31:19.870 "uuid": "902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:19.870 "is_configured": false, 00:31:19.870 "data_offset": 0, 00:31:19.870 "data_size": 63488 00:31:19.870 }, 00:31:19.870 { 00:31:19.870 "name": "BaseBdev3", 00:31:19.870 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:19.870 "is_configured": true, 00:31:19.870 "data_offset": 2048, 00:31:19.870 "data_size": 63488 00:31:19.870 } 00:31:19.870 ] 00:31:19.870 }' 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:19.870 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.437 [2024-11-26 17:27:57.679024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:20.437 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:20.438 "name": "Existed_Raid", 00:31:20.438 "uuid": "5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:20.438 "strip_size_kb": 0, 00:31:20.438 "state": "configuring", 00:31:20.438 "raid_level": "raid1", 00:31:20.438 "superblock": true, 00:31:20.438 "num_base_bdevs": 3, 00:31:20.438 "num_base_bdevs_discovered": 1, 00:31:20.438 "num_base_bdevs_operational": 3, 00:31:20.438 "base_bdevs_list": [ 00:31:20.438 { 00:31:20.438 "name": null, 00:31:20.438 "uuid": "4586f982-8a3f-44dd-8143-c117d18142fd", 00:31:20.438 "is_configured": false, 00:31:20.438 "data_offset": 0, 00:31:20.438 "data_size": 63488 00:31:20.438 }, 00:31:20.438 { 00:31:20.438 "name": null, 00:31:20.438 "uuid": 
"902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:20.438 "is_configured": false, 00:31:20.438 "data_offset": 0, 00:31:20.438 "data_size": 63488 00:31:20.438 }, 00:31:20.438 { 00:31:20.438 "name": "BaseBdev3", 00:31:20.438 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:20.438 "is_configured": true, 00:31:20.438 "data_offset": 2048, 00:31:20.438 "data_size": 63488 00:31:20.438 } 00:31:20.438 ] 00:31:20.438 }' 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:20.438 17:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.005 [2024-11-26 17:27:58.284356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:21.005 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:21.006 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:21.006 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:21.006 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.006 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.006 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.006 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.006 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:21.006 "name": "Existed_Raid", 00:31:21.006 "uuid": "5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:21.006 "strip_size_kb": 0, 00:31:21.006 "state": "configuring", 00:31:21.006 
"raid_level": "raid1", 00:31:21.006 "superblock": true, 00:31:21.006 "num_base_bdevs": 3, 00:31:21.006 "num_base_bdevs_discovered": 2, 00:31:21.006 "num_base_bdevs_operational": 3, 00:31:21.006 "base_bdevs_list": [ 00:31:21.006 { 00:31:21.006 "name": null, 00:31:21.006 "uuid": "4586f982-8a3f-44dd-8143-c117d18142fd", 00:31:21.006 "is_configured": false, 00:31:21.006 "data_offset": 0, 00:31:21.006 "data_size": 63488 00:31:21.006 }, 00:31:21.006 { 00:31:21.006 "name": "BaseBdev2", 00:31:21.006 "uuid": "902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:21.006 "is_configured": true, 00:31:21.006 "data_offset": 2048, 00:31:21.006 "data_size": 63488 00:31:21.006 }, 00:31:21.006 { 00:31:21.006 "name": "BaseBdev3", 00:31:21.006 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:21.006 "is_configured": true, 00:31:21.006 "data_offset": 2048, 00:31:21.006 "data_size": 63488 00:31:21.006 } 00:31:21.006 ] 00:31:21.006 }' 00:31:21.006 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:21.006 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.574 17:27:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4586f982-8a3f-44dd-8143-c117d18142fd 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.574 [2024-11-26 17:27:58.864697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:21.574 [2024-11-26 17:27:58.865106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:21.574 [2024-11-26 17:27:58.865126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:21.574 [2024-11-26 17:27:58.865394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:31:21.574 NewBaseBdev 00:31:21.574 [2024-11-26 17:27:58.865536] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:21.574 [2024-11-26 17:27:58.865549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:31:21.574 [2024-11-26 17:27:58.865675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:21.574 
17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.574 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.574 [ 00:31:21.574 { 00:31:21.574 "name": "NewBaseBdev", 00:31:21.574 "aliases": [ 00:31:21.574 "4586f982-8a3f-44dd-8143-c117d18142fd" 00:31:21.574 ], 00:31:21.574 "product_name": "Malloc disk", 00:31:21.574 "block_size": 512, 00:31:21.574 "num_blocks": 65536, 00:31:21.574 "uuid": "4586f982-8a3f-44dd-8143-c117d18142fd", 00:31:21.574 "assigned_rate_limits": { 00:31:21.574 "rw_ios_per_sec": 0, 00:31:21.574 "rw_mbytes_per_sec": 0, 00:31:21.574 "r_mbytes_per_sec": 0, 00:31:21.574 "w_mbytes_per_sec": 0 00:31:21.574 }, 00:31:21.574 "claimed": true, 00:31:21.574 "claim_type": "exclusive_write", 00:31:21.574 
"zoned": false, 00:31:21.574 "supported_io_types": { 00:31:21.574 "read": true, 00:31:21.574 "write": true, 00:31:21.574 "unmap": true, 00:31:21.574 "flush": true, 00:31:21.574 "reset": true, 00:31:21.574 "nvme_admin": false, 00:31:21.574 "nvme_io": false, 00:31:21.574 "nvme_io_md": false, 00:31:21.574 "write_zeroes": true, 00:31:21.574 "zcopy": true, 00:31:21.574 "get_zone_info": false, 00:31:21.574 "zone_management": false, 00:31:21.574 "zone_append": false, 00:31:21.574 "compare": false, 00:31:21.574 "compare_and_write": false, 00:31:21.574 "abort": true, 00:31:21.574 "seek_hole": false, 00:31:21.574 "seek_data": false, 00:31:21.574 "copy": true, 00:31:21.574 "nvme_iov_md": false 00:31:21.574 }, 00:31:21.574 "memory_domains": [ 00:31:21.574 { 00:31:21.574 "dma_device_id": "system", 00:31:21.575 "dma_device_type": 1 00:31:21.575 }, 00:31:21.575 { 00:31:21.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:21.575 "dma_device_type": 2 00:31:21.575 } 00:31:21.575 ], 00:31:21.575 "driver_specific": {} 00:31:21.575 } 00:31:21.575 ] 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:21.575 "name": "Existed_Raid", 00:31:21.575 "uuid": "5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:21.575 "strip_size_kb": 0, 00:31:21.575 "state": "online", 00:31:21.575 "raid_level": "raid1", 00:31:21.575 "superblock": true, 00:31:21.575 "num_base_bdevs": 3, 00:31:21.575 "num_base_bdevs_discovered": 3, 00:31:21.575 "num_base_bdevs_operational": 3, 00:31:21.575 "base_bdevs_list": [ 00:31:21.575 { 00:31:21.575 "name": "NewBaseBdev", 00:31:21.575 "uuid": "4586f982-8a3f-44dd-8143-c117d18142fd", 00:31:21.575 "is_configured": true, 00:31:21.575 "data_offset": 2048, 00:31:21.575 "data_size": 63488 00:31:21.575 }, 00:31:21.575 { 00:31:21.575 "name": "BaseBdev2", 00:31:21.575 "uuid": "902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:21.575 "is_configured": true, 00:31:21.575 "data_offset": 2048, 00:31:21.575 "data_size": 63488 00:31:21.575 }, 00:31:21.575 
{ 00:31:21.575 "name": "BaseBdev3", 00:31:21.575 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:21.575 "is_configured": true, 00:31:21.575 "data_offset": 2048, 00:31:21.575 "data_size": 63488 00:31:21.575 } 00:31:21.575 ] 00:31:21.575 }' 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:21.575 17:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.144 [2024-11-26 17:27:59.349241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.144 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:22.144 "name": "Existed_Raid", 00:31:22.144 
"aliases": [ 00:31:22.144 "5d33ed99-a2f2-40df-a106-ea9086b71885" 00:31:22.144 ], 00:31:22.144 "product_name": "Raid Volume", 00:31:22.144 "block_size": 512, 00:31:22.144 "num_blocks": 63488, 00:31:22.144 "uuid": "5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:22.144 "assigned_rate_limits": { 00:31:22.144 "rw_ios_per_sec": 0, 00:31:22.144 "rw_mbytes_per_sec": 0, 00:31:22.144 "r_mbytes_per_sec": 0, 00:31:22.144 "w_mbytes_per_sec": 0 00:31:22.144 }, 00:31:22.144 "claimed": false, 00:31:22.144 "zoned": false, 00:31:22.144 "supported_io_types": { 00:31:22.144 "read": true, 00:31:22.144 "write": true, 00:31:22.144 "unmap": false, 00:31:22.144 "flush": false, 00:31:22.144 "reset": true, 00:31:22.144 "nvme_admin": false, 00:31:22.144 "nvme_io": false, 00:31:22.144 "nvme_io_md": false, 00:31:22.144 "write_zeroes": true, 00:31:22.144 "zcopy": false, 00:31:22.144 "get_zone_info": false, 00:31:22.144 "zone_management": false, 00:31:22.144 "zone_append": false, 00:31:22.144 "compare": false, 00:31:22.144 "compare_and_write": false, 00:31:22.144 "abort": false, 00:31:22.144 "seek_hole": false, 00:31:22.144 "seek_data": false, 00:31:22.144 "copy": false, 00:31:22.144 "nvme_iov_md": false 00:31:22.144 }, 00:31:22.144 "memory_domains": [ 00:31:22.144 { 00:31:22.144 "dma_device_id": "system", 00:31:22.144 "dma_device_type": 1 00:31:22.144 }, 00:31:22.144 { 00:31:22.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:22.144 "dma_device_type": 2 00:31:22.144 }, 00:31:22.144 { 00:31:22.144 "dma_device_id": "system", 00:31:22.144 "dma_device_type": 1 00:31:22.144 }, 00:31:22.144 { 00:31:22.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:22.144 "dma_device_type": 2 00:31:22.144 }, 00:31:22.144 { 00:31:22.144 "dma_device_id": "system", 00:31:22.144 "dma_device_type": 1 00:31:22.144 }, 00:31:22.144 { 00:31:22.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:22.144 "dma_device_type": 2 00:31:22.144 } 00:31:22.144 ], 00:31:22.144 "driver_specific": { 00:31:22.144 "raid": { 00:31:22.144 
"uuid": "5d33ed99-a2f2-40df-a106-ea9086b71885", 00:31:22.144 "strip_size_kb": 0, 00:31:22.145 "state": "online", 00:31:22.145 "raid_level": "raid1", 00:31:22.145 "superblock": true, 00:31:22.145 "num_base_bdevs": 3, 00:31:22.145 "num_base_bdevs_discovered": 3, 00:31:22.145 "num_base_bdevs_operational": 3, 00:31:22.145 "base_bdevs_list": [ 00:31:22.145 { 00:31:22.145 "name": "NewBaseBdev", 00:31:22.145 "uuid": "4586f982-8a3f-44dd-8143-c117d18142fd", 00:31:22.145 "is_configured": true, 00:31:22.145 "data_offset": 2048, 00:31:22.145 "data_size": 63488 00:31:22.145 }, 00:31:22.145 { 00:31:22.145 "name": "BaseBdev2", 00:31:22.145 "uuid": "902a259e-a2be-4864-bd77-5a871a1dc5ec", 00:31:22.145 "is_configured": true, 00:31:22.145 "data_offset": 2048, 00:31:22.145 "data_size": 63488 00:31:22.145 }, 00:31:22.145 { 00:31:22.145 "name": "BaseBdev3", 00:31:22.145 "uuid": "5f58aea6-1e45-40db-a162-3622e38dcdf0", 00:31:22.145 "is_configured": true, 00:31:22.145 "data_offset": 2048, 00:31:22.145 "data_size": 63488 00:31:22.145 } 00:31:22.145 ] 00:31:22.145 } 00:31:22.145 } 00:31:22.145 }' 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:22.145 BaseBdev2 00:31:22.145 BaseBdev3' 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:22.145 
17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.145 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.405 [2024-11-26 17:27:59.596920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:22.405 [2024-11-26 17:27:59.597089] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:22.405 [2024-11-26 17:27:59.597180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:22.405 [2024-11-26 17:27:59.597480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:22.405 [2024-11-26 17:27:59.597494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68438 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68438 ']' 00:31:22.405 17:27:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68438 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68438 00:31:22.405 killing process with pid 68438 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68438' 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68438 00:31:22.405 [2024-11-26 17:27:59.641620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:22.405 17:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68438 00:31:22.664 [2024-11-26 17:27:59.956590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:24.049 17:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:31:24.049 00:31:24.049 real 0m10.963s 00:31:24.049 user 0m17.471s 00:31:24.049 sys 0m2.001s 00:31:24.049 17:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:24.049 17:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.049 ************************************ 00:31:24.049 END TEST raid_state_function_test_sb 00:31:24.049 ************************************ 00:31:24.049 17:28:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:31:24.049 17:28:01 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:24.049 17:28:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.049 17:28:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:24.049 ************************************ 00:31:24.049 START TEST raid_superblock_test 00:31:24.049 ************************************ 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:31:24.049 17:28:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:31:24.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69067 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69067 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 69067 ']' 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:24.049 17:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.049 [2024-11-26 17:28:01.317346] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:31:24.049 [2024-11-26 17:28:01.317795] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69067 ] 00:31:24.308 [2024-11-26 17:28:01.513209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.308 [2024-11-26 17:28:01.629845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.578 [2024-11-26 17:28:01.830107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:24.578 [2024-11-26 17:28:01.830158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:24.860 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:24.860 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:31:24.860 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:31:24.860 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:24.860 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:31:24.860 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:31:24.860 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:24.861 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:24.861 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:24.861 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:24.861 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:31:24.861 
17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.861 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.861 malloc1 00:31:24.861 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.861 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:24.861 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.861 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.861 [2024-11-26 17:28:02.299019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:24.861 [2024-11-26 17:28:02.299104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:24.861 [2024-11-26 17:28:02.299129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:24.861 [2024-11-26 17:28:02.299142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:24.861 [2024-11-26 17:28:02.301691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:24.861 [2024-11-26 17:28:02.301745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:25.120 pt1 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.120 malloc2 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.120 [2024-11-26 17:28:02.348073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:25.120 [2024-11-26 17:28:02.348141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:25.120 [2024-11-26 17:28:02.348172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:25.120 [2024-11-26 17:28:02.348183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:25.120 [2024-11-26 17:28:02.350523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:25.120 [2024-11-26 17:28:02.350557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:25.120 
pt2 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.120 malloc3 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.120 [2024-11-26 17:28:02.415015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:25.120 [2024-11-26 17:28:02.415089] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:25.120 [2024-11-26 17:28:02.415118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:25.120 [2024-11-26 17:28:02.415131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:25.120 [2024-11-26 17:28:02.417628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:25.120 [2024-11-26 17:28:02.417665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:25.120 pt3 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.120 [2024-11-26 17:28:02.427086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:25.120 [2024-11-26 17:28:02.429145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:25.120 [2024-11-26 17:28:02.429214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:25.120 [2024-11-26 17:28:02.429371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:25.120 [2024-11-26 17:28:02.429391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:25.120 [2024-11-26 17:28:02.429639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:25.120 
[2024-11-26 17:28:02.429830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:25.120 [2024-11-26 17:28:02.429848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:25.120 [2024-11-26 17:28:02.430023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.120 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:25.120 "name": "raid_bdev1", 00:31:25.120 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:25.120 "strip_size_kb": 0, 00:31:25.120 "state": "online", 00:31:25.120 "raid_level": "raid1", 00:31:25.120 "superblock": true, 00:31:25.120 "num_base_bdevs": 3, 00:31:25.120 "num_base_bdevs_discovered": 3, 00:31:25.120 "num_base_bdevs_operational": 3, 00:31:25.120 "base_bdevs_list": [ 00:31:25.120 { 00:31:25.120 "name": "pt1", 00:31:25.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:25.120 "is_configured": true, 00:31:25.121 "data_offset": 2048, 00:31:25.121 "data_size": 63488 00:31:25.121 }, 00:31:25.121 { 00:31:25.121 "name": "pt2", 00:31:25.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:25.121 "is_configured": true, 00:31:25.121 "data_offset": 2048, 00:31:25.121 "data_size": 63488 00:31:25.121 }, 00:31:25.121 { 00:31:25.121 "name": "pt3", 00:31:25.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:25.121 "is_configured": true, 00:31:25.121 "data_offset": 2048, 00:31:25.121 "data_size": 63488 00:31:25.121 } 00:31:25.121 ] 00:31:25.121 }' 00:31:25.121 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:25.121 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:25.688 17:28:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.688 [2024-11-26 17:28:02.875484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.688 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:25.688 "name": "raid_bdev1", 00:31:25.688 "aliases": [ 00:31:25.689 "46181549-ae8d-4163-85b7-7193ce9559e1" 00:31:25.689 ], 00:31:25.689 "product_name": "Raid Volume", 00:31:25.689 "block_size": 512, 00:31:25.689 "num_blocks": 63488, 00:31:25.689 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:25.689 "assigned_rate_limits": { 00:31:25.689 "rw_ios_per_sec": 0, 00:31:25.689 "rw_mbytes_per_sec": 0, 00:31:25.689 "r_mbytes_per_sec": 0, 00:31:25.689 "w_mbytes_per_sec": 0 00:31:25.689 }, 00:31:25.689 "claimed": false, 00:31:25.689 "zoned": false, 00:31:25.689 "supported_io_types": { 00:31:25.689 "read": true, 00:31:25.689 "write": true, 00:31:25.689 "unmap": false, 00:31:25.689 "flush": false, 00:31:25.689 "reset": true, 00:31:25.689 "nvme_admin": false, 00:31:25.689 "nvme_io": false, 00:31:25.689 "nvme_io_md": false, 00:31:25.689 "write_zeroes": true, 00:31:25.689 "zcopy": false, 00:31:25.689 "get_zone_info": false, 00:31:25.689 "zone_management": false, 00:31:25.689 "zone_append": false, 00:31:25.689 "compare": false, 00:31:25.689 
"compare_and_write": false, 00:31:25.689 "abort": false, 00:31:25.689 "seek_hole": false, 00:31:25.689 "seek_data": false, 00:31:25.689 "copy": false, 00:31:25.689 "nvme_iov_md": false 00:31:25.689 }, 00:31:25.689 "memory_domains": [ 00:31:25.689 { 00:31:25.689 "dma_device_id": "system", 00:31:25.689 "dma_device_type": 1 00:31:25.689 }, 00:31:25.689 { 00:31:25.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.689 "dma_device_type": 2 00:31:25.689 }, 00:31:25.689 { 00:31:25.689 "dma_device_id": "system", 00:31:25.689 "dma_device_type": 1 00:31:25.689 }, 00:31:25.689 { 00:31:25.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.689 "dma_device_type": 2 00:31:25.689 }, 00:31:25.689 { 00:31:25.689 "dma_device_id": "system", 00:31:25.689 "dma_device_type": 1 00:31:25.689 }, 00:31:25.689 { 00:31:25.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.689 "dma_device_type": 2 00:31:25.689 } 00:31:25.689 ], 00:31:25.689 "driver_specific": { 00:31:25.689 "raid": { 00:31:25.689 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:25.689 "strip_size_kb": 0, 00:31:25.689 "state": "online", 00:31:25.689 "raid_level": "raid1", 00:31:25.689 "superblock": true, 00:31:25.689 "num_base_bdevs": 3, 00:31:25.689 "num_base_bdevs_discovered": 3, 00:31:25.689 "num_base_bdevs_operational": 3, 00:31:25.689 "base_bdevs_list": [ 00:31:25.689 { 00:31:25.689 "name": "pt1", 00:31:25.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:25.689 "is_configured": true, 00:31:25.689 "data_offset": 2048, 00:31:25.689 "data_size": 63488 00:31:25.689 }, 00:31:25.689 { 00:31:25.689 "name": "pt2", 00:31:25.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:25.689 "is_configured": true, 00:31:25.689 "data_offset": 2048, 00:31:25.689 "data_size": 63488 00:31:25.689 }, 00:31:25.689 { 00:31:25.689 "name": "pt3", 00:31:25.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:25.689 "is_configured": true, 00:31:25.689 "data_offset": 2048, 00:31:25.689 "data_size": 63488 00:31:25.689 } 
00:31:25.689 ] 00:31:25.689 } 00:31:25.689 } 00:31:25.689 }' 00:31:25.689 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:25.689 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:25.689 pt2 00:31:25.689 pt3' 00:31:25.689 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:25.689 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:25.689 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:25.689 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:25.689 17:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:25.689 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.689 17:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.689 17:28:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.689 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.948 [2024-11-26 17:28:03.143486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=46181549-ae8d-4163-85b7-7193ce9559e1 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 46181549-ae8d-4163-85b7-7193ce9559e1 ']' 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.948 [2024-11-26 17:28:03.183224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:25.948 [2024-11-26 17:28:03.183260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:25.948 [2024-11-26 17:28:03.183346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:25.948 [2024-11-26 17:28:03.183430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:25.948 [2024-11-26 17:28:03.183443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:25.948 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:25.949 
17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.949 [2024-11-26 17:28:03.315307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:25.949 [2024-11-26 17:28:03.317726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:25.949 [2024-11-26 17:28:03.317797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:31:25.949 [2024-11-26 17:28:03.317870] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:25.949 [2024-11-26 17:28:03.317934] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:25.949 [2024-11-26 17:28:03.317958] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:25.949 [2024-11-26 17:28:03.317991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:25.949 [2024-11-26 17:28:03.318004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:31:25.949 request: 00:31:25.949 { 00:31:25.949 "name": "raid_bdev1", 00:31:25.949 "raid_level": "raid1", 00:31:25.949 "base_bdevs": [ 00:31:25.949 "malloc1", 00:31:25.949 "malloc2", 00:31:25.949 "malloc3" 00:31:25.949 ], 00:31:25.949 "superblock": false, 00:31:25.949 "method": "bdev_raid_create", 00:31:25.949 "req_id": 1 00:31:25.949 } 00:31:25.949 Got JSON-RPC error response 00:31:25.949 response: 00:31:25.949 { 00:31:25.949 "code": -17, 00:31:25.949 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:25.949 } 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.949 17:28:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.949 [2024-11-26 17:28:03.371245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:25.949 [2024-11-26 17:28:03.371298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:25.949 [2024-11-26 17:28:03.371322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:25.949 [2024-11-26 17:28:03.371333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:25.949 [2024-11-26 17:28:03.373785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:25.949 [2024-11-26 17:28:03.373822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:25.949 [2024-11-26 17:28:03.373903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:25.949 [2024-11-26 17:28:03.373953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:25.949 pt1 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.949 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.208 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.208 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:26.208 "name": "raid_bdev1", 00:31:26.208 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:26.208 "strip_size_kb": 0, 00:31:26.208 "state": "configuring", 00:31:26.208 
"raid_level": "raid1", 00:31:26.208 "superblock": true, 00:31:26.208 "num_base_bdevs": 3, 00:31:26.208 "num_base_bdevs_discovered": 1, 00:31:26.208 "num_base_bdevs_operational": 3, 00:31:26.208 "base_bdevs_list": [ 00:31:26.208 { 00:31:26.208 "name": "pt1", 00:31:26.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:26.208 "is_configured": true, 00:31:26.208 "data_offset": 2048, 00:31:26.208 "data_size": 63488 00:31:26.208 }, 00:31:26.208 { 00:31:26.208 "name": null, 00:31:26.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:26.208 "is_configured": false, 00:31:26.208 "data_offset": 2048, 00:31:26.208 "data_size": 63488 00:31:26.208 }, 00:31:26.208 { 00:31:26.208 "name": null, 00:31:26.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:26.208 "is_configured": false, 00:31:26.208 "data_offset": 2048, 00:31:26.208 "data_size": 63488 00:31:26.208 } 00:31:26.208 ] 00:31:26.208 }' 00:31:26.208 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:26.208 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.467 [2024-11-26 17:28:03.827400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:26.467 [2024-11-26 17:28:03.827478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:26.467 [2024-11-26 17:28:03.827507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:26.467 [2024-11-26 17:28:03.827520] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:26.467 [2024-11-26 17:28:03.828006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:26.467 [2024-11-26 17:28:03.828034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:26.467 [2024-11-26 17:28:03.828147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:26.467 [2024-11-26 17:28:03.828173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:26.467 pt2 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.467 [2024-11-26 17:28:03.839400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:26.467 "name": "raid_bdev1", 00:31:26.467 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:26.467 "strip_size_kb": 0, 00:31:26.467 "state": "configuring", 00:31:26.467 "raid_level": "raid1", 00:31:26.467 "superblock": true, 00:31:26.467 "num_base_bdevs": 3, 00:31:26.467 "num_base_bdevs_discovered": 1, 00:31:26.467 "num_base_bdevs_operational": 3, 00:31:26.467 "base_bdevs_list": [ 00:31:26.467 { 00:31:26.467 "name": "pt1", 00:31:26.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:26.467 "is_configured": true, 00:31:26.467 "data_offset": 2048, 00:31:26.467 "data_size": 63488 00:31:26.467 }, 00:31:26.467 { 00:31:26.467 "name": null, 00:31:26.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:26.467 "is_configured": false, 00:31:26.467 "data_offset": 0, 00:31:26.467 "data_size": 63488 00:31:26.467 }, 00:31:26.467 { 00:31:26.467 "name": null, 00:31:26.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:26.467 "is_configured": false, 00:31:26.467 "data_offset": 2048, 00:31:26.467 
"data_size": 63488 00:31:26.467 } 00:31:26.467 ] 00:31:26.467 }' 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:26.467 17:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.034 [2024-11-26 17:28:04.303479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:27.034 [2024-11-26 17:28:04.303564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:27.034 [2024-11-26 17:28:04.303589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:27.034 [2024-11-26 17:28:04.303603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:27.034 [2024-11-26 17:28:04.304097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:27.034 [2024-11-26 17:28:04.304121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:27.034 [2024-11-26 17:28:04.304208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:27.034 [2024-11-26 17:28:04.304245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:27.034 pt2 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.034 [2024-11-26 17:28:04.315463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:27.034 [2024-11-26 17:28:04.315517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:27.034 [2024-11-26 17:28:04.315535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:27.034 [2024-11-26 17:28:04.315548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:27.034 [2024-11-26 17:28:04.315966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:27.034 [2024-11-26 17:28:04.315996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:27.034 [2024-11-26 17:28:04.316076] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:27.034 [2024-11-26 17:28:04.316100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:27.034 [2024-11-26 17:28:04.316220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:27.034 [2024-11-26 17:28:04.316240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:27.034 [2024-11-26 17:28:04.316495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:27.034 [2024-11-26 17:28:04.316642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:31:27.034 [2024-11-26 17:28:04.316660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:31:27.034 [2024-11-26 17:28:04.316820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:27.034 pt3 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:27.034 "name": "raid_bdev1", 00:31:27.034 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:27.034 "strip_size_kb": 0, 00:31:27.034 "state": "online", 00:31:27.034 "raid_level": "raid1", 00:31:27.034 "superblock": true, 00:31:27.034 "num_base_bdevs": 3, 00:31:27.034 "num_base_bdevs_discovered": 3, 00:31:27.034 "num_base_bdevs_operational": 3, 00:31:27.034 "base_bdevs_list": [ 00:31:27.034 { 00:31:27.034 "name": "pt1", 00:31:27.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:27.034 "is_configured": true, 00:31:27.034 "data_offset": 2048, 00:31:27.034 "data_size": 63488 00:31:27.034 }, 00:31:27.034 { 00:31:27.034 "name": "pt2", 00:31:27.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:27.034 "is_configured": true, 00:31:27.034 "data_offset": 2048, 00:31:27.034 "data_size": 63488 00:31:27.034 }, 00:31:27.034 { 00:31:27.034 "name": "pt3", 00:31:27.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:27.034 "is_configured": true, 00:31:27.034 "data_offset": 2048, 00:31:27.034 "data_size": 63488 00:31:27.034 } 00:31:27.034 ] 00:31:27.034 }' 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:27.034 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:27.602 17:28:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.602 [2024-11-26 17:28:04.799938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.602 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:27.602 "name": "raid_bdev1", 00:31:27.602 "aliases": [ 00:31:27.602 "46181549-ae8d-4163-85b7-7193ce9559e1" 00:31:27.602 ], 00:31:27.602 "product_name": "Raid Volume", 00:31:27.602 "block_size": 512, 00:31:27.602 "num_blocks": 63488, 00:31:27.602 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:27.602 "assigned_rate_limits": { 00:31:27.602 "rw_ios_per_sec": 0, 00:31:27.602 "rw_mbytes_per_sec": 0, 00:31:27.602 "r_mbytes_per_sec": 0, 00:31:27.602 "w_mbytes_per_sec": 0 00:31:27.602 }, 00:31:27.602 "claimed": false, 00:31:27.602 "zoned": false, 00:31:27.602 "supported_io_types": { 00:31:27.602 "read": true, 00:31:27.602 "write": true, 00:31:27.602 "unmap": false, 00:31:27.602 "flush": false, 00:31:27.602 "reset": true, 00:31:27.602 "nvme_admin": false, 00:31:27.602 "nvme_io": false, 00:31:27.602 "nvme_io_md": false, 00:31:27.602 "write_zeroes": true, 00:31:27.602 "zcopy": false, 00:31:27.602 "get_zone_info": false, 00:31:27.602 
"zone_management": false, 00:31:27.602 "zone_append": false, 00:31:27.602 "compare": false, 00:31:27.602 "compare_and_write": false, 00:31:27.602 "abort": false, 00:31:27.602 "seek_hole": false, 00:31:27.602 "seek_data": false, 00:31:27.602 "copy": false, 00:31:27.602 "nvme_iov_md": false 00:31:27.602 }, 00:31:27.602 "memory_domains": [ 00:31:27.602 { 00:31:27.602 "dma_device_id": "system", 00:31:27.602 "dma_device_type": 1 00:31:27.602 }, 00:31:27.602 { 00:31:27.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.602 "dma_device_type": 2 00:31:27.602 }, 00:31:27.602 { 00:31:27.602 "dma_device_id": "system", 00:31:27.602 "dma_device_type": 1 00:31:27.602 }, 00:31:27.602 { 00:31:27.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.602 "dma_device_type": 2 00:31:27.602 }, 00:31:27.602 { 00:31:27.602 "dma_device_id": "system", 00:31:27.602 "dma_device_type": 1 00:31:27.602 }, 00:31:27.602 { 00:31:27.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.602 "dma_device_type": 2 00:31:27.602 } 00:31:27.602 ], 00:31:27.602 "driver_specific": { 00:31:27.602 "raid": { 00:31:27.602 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:27.602 "strip_size_kb": 0, 00:31:27.602 "state": "online", 00:31:27.602 "raid_level": "raid1", 00:31:27.602 "superblock": true, 00:31:27.602 "num_base_bdevs": 3, 00:31:27.602 "num_base_bdevs_discovered": 3, 00:31:27.602 "num_base_bdevs_operational": 3, 00:31:27.602 "base_bdevs_list": [ 00:31:27.602 { 00:31:27.602 "name": "pt1", 00:31:27.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:27.602 "is_configured": true, 00:31:27.602 "data_offset": 2048, 00:31:27.602 "data_size": 63488 00:31:27.602 }, 00:31:27.602 { 00:31:27.602 "name": "pt2", 00:31:27.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:27.602 "is_configured": true, 00:31:27.603 "data_offset": 2048, 00:31:27.603 "data_size": 63488 00:31:27.603 }, 00:31:27.603 { 00:31:27.603 "name": "pt3", 00:31:27.603 "uuid": "00000000-0000-0000-0000-000000000003", 
00:31:27.603 "is_configured": true, 00:31:27.603 "data_offset": 2048, 00:31:27.603 "data_size": 63488 00:31:27.603 } 00:31:27.603 ] 00:31:27.603 } 00:31:27.603 } 00:31:27.603 }' 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:27.603 pt2 00:31:27.603 pt3' 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.603 
17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.603 17:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.603 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:27.603 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:27.603 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:27.603 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:27.603 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.603 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.603 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.603 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:27.862 [2024-11-26 17:28:05.071975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 46181549-ae8d-4163-85b7-7193ce9559e1 '!=' 46181549-ae8d-4163-85b7-7193ce9559e1 ']' 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.862 [2024-11-26 17:28:05.107730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:27.862 "name": "raid_bdev1", 00:31:27.862 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:27.862 "strip_size_kb": 0, 00:31:27.862 "state": "online", 00:31:27.862 "raid_level": "raid1", 00:31:27.862 "superblock": true, 00:31:27.862 "num_base_bdevs": 3, 00:31:27.862 "num_base_bdevs_discovered": 2, 00:31:27.862 "num_base_bdevs_operational": 2, 00:31:27.862 "base_bdevs_list": [ 00:31:27.862 { 00:31:27.862 "name": null, 00:31:27.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.862 "is_configured": false, 00:31:27.862 "data_offset": 0, 00:31:27.862 "data_size": 63488 00:31:27.862 }, 00:31:27.862 { 00:31:27.862 "name": "pt2", 00:31:27.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:27.862 "is_configured": true, 00:31:27.862 "data_offset": 2048, 00:31:27.862 "data_size": 63488 00:31:27.862 }, 00:31:27.862 { 00:31:27.862 "name": "pt3", 00:31:27.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:27.862 "is_configured": true, 00:31:27.862 "data_offset": 2048, 00:31:27.862 "data_size": 63488 00:31:27.862 } 00:31:27.862 ] 00:31:27.862 }' 00:31:27.862 17:28:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:27.862 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.430 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:28.430 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.430 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.430 [2024-11-26 17:28:05.575762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:28.430 [2024-11-26 17:28:05.575803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:28.430 [2024-11-26 17:28:05.575884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:28.430 [2024-11-26 17:28:05.575945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:28.431 [2024-11-26 17:28:05.575963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:31:28.431 
17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:28.431 [2024-11-26 17:28:05.647728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:28.431 [2024-11-26 17:28:05.647787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:28.431 [2024-11-26 17:28:05.647805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:31:28.431 [2024-11-26 17:28:05.647819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:28.431 [2024-11-26 17:28:05.650378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:28.431 [2024-11-26 17:28:05.650422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:28.431 [2024-11-26 17:28:05.650504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:28.431 [2024-11-26 17:28:05.650557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:28.431 pt2 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:28.431 "name": "raid_bdev1", 00:31:28.431 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:28.431 "strip_size_kb": 0, 00:31:28.431 "state": "configuring", 00:31:28.431 "raid_level": "raid1", 00:31:28.431 "superblock": true, 00:31:28.431 "num_base_bdevs": 3, 00:31:28.431 "num_base_bdevs_discovered": 1, 00:31:28.431 "num_base_bdevs_operational": 2, 00:31:28.431 "base_bdevs_list": [ 00:31:28.431 { 00:31:28.431 "name": null, 00:31:28.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.431 "is_configured": false, 00:31:28.431 "data_offset": 2048, 00:31:28.431 "data_size": 63488 00:31:28.431 }, 00:31:28.431 { 00:31:28.431 "name": "pt2", 00:31:28.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:28.431 "is_configured": true, 00:31:28.431 "data_offset": 2048, 00:31:28.431 "data_size": 63488 00:31:28.431 }, 00:31:28.431 { 00:31:28.431 "name": null, 00:31:28.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:28.431 "is_configured": false, 00:31:28.431 "data_offset": 2048, 00:31:28.431 "data_size": 63488 00:31:28.431 } 00:31:28.431 ] 00:31:28.431 }' 
00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:28.431 17:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.690 [2024-11-26 17:28:06.071888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:28.690 [2024-11-26 17:28:06.071969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:28.690 [2024-11-26 17:28:06.071995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:31:28.690 [2024-11-26 17:28:06.072011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:28.690 [2024-11-26 17:28:06.072538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:28.690 [2024-11-26 17:28:06.072564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:28.690 [2024-11-26 17:28:06.072663] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:28.690 [2024-11-26 17:28:06.072694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:28.690 [2024-11-26 17:28:06.072813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:28.690 [2024-11-26 17:28:06.072827] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:28.690 [2024-11-26 17:28:06.073146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:28.690 [2024-11-26 17:28:06.073319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:28.690 [2024-11-26 17:28:06.073337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:31:28.690 [2024-11-26 17:28:06.073483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:28.690 pt3 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:28.690 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:28.691 "name": "raid_bdev1", 00:31:28.691 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:28.691 "strip_size_kb": 0, 00:31:28.691 "state": "online", 00:31:28.691 "raid_level": "raid1", 00:31:28.691 "superblock": true, 00:31:28.691 "num_base_bdevs": 3, 00:31:28.691 "num_base_bdevs_discovered": 2, 00:31:28.691 "num_base_bdevs_operational": 2, 00:31:28.691 "base_bdevs_list": [ 00:31:28.691 { 00:31:28.691 "name": null, 00:31:28.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.691 "is_configured": false, 00:31:28.691 "data_offset": 2048, 00:31:28.691 "data_size": 63488 00:31:28.691 }, 00:31:28.691 { 00:31:28.691 "name": "pt2", 00:31:28.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:28.691 "is_configured": true, 00:31:28.691 "data_offset": 2048, 00:31:28.691 "data_size": 63488 00:31:28.691 }, 00:31:28.691 { 00:31:28.691 "name": "pt3", 00:31:28.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:28.691 "is_configured": true, 00:31:28.691 "data_offset": 2048, 00:31:28.691 "data_size": 63488 00:31:28.691 } 00:31:28.691 ] 00:31:28.691 }' 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:28.691 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.259 
17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.259 [2024-11-26 17:28:06.507936] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:29.259 [2024-11-26 17:28:06.507973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:29.259 [2024-11-26 17:28:06.508064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:29.259 [2024-11-26 17:28:06.508130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:29.259 [2024-11-26 17:28:06.508142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.259 17:28:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.259 [2024-11-26 17:28:06.563966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:29.259 [2024-11-26 17:28:06.564022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:29.259 [2024-11-26 17:28:06.564061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:29.259 [2024-11-26 17:28:06.564074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:29.259 [2024-11-26 17:28:06.566580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:29.259 [2024-11-26 17:28:06.566618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:29.259 [2024-11-26 17:28:06.566698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:29.259 [2024-11-26 17:28:06.566744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:29.259 [2024-11-26 17:28:06.566889] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:29.259 [2024-11-26 17:28:06.566917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:29.259 [2024-11-26 17:28:06.566935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:31:29.259 [2024-11-26 
17:28:06.567003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:29.259 pt1 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:31:29.259 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.260 17:28:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:29.260 "name": "raid_bdev1", 00:31:29.260 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:29.260 "strip_size_kb": 0, 00:31:29.260 "state": "configuring", 00:31:29.260 "raid_level": "raid1", 00:31:29.260 "superblock": true, 00:31:29.260 "num_base_bdevs": 3, 00:31:29.260 "num_base_bdevs_discovered": 1, 00:31:29.260 "num_base_bdevs_operational": 2, 00:31:29.260 "base_bdevs_list": [ 00:31:29.260 { 00:31:29.260 "name": null, 00:31:29.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.260 "is_configured": false, 00:31:29.260 "data_offset": 2048, 00:31:29.260 "data_size": 63488 00:31:29.260 }, 00:31:29.260 { 00:31:29.260 "name": "pt2", 00:31:29.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:29.260 "is_configured": true, 00:31:29.260 "data_offset": 2048, 00:31:29.260 "data_size": 63488 00:31:29.260 }, 00:31:29.260 { 00:31:29.260 "name": null, 00:31:29.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:29.260 "is_configured": false, 00:31:29.260 "data_offset": 2048, 00:31:29.260 "data_size": 63488 00:31:29.260 } 00:31:29.260 ] 00:31:29.260 }' 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:29.260 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.826 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:31:29.826 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.826 17:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:29.826 17:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.826 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.826 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:31:29.826 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:29.826 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.826 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.826 [2024-11-26 17:28:07.052111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:29.826 [2024-11-26 17:28:07.052537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:29.826 [2024-11-26 17:28:07.052631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:31:29.826 [2024-11-26 17:28:07.052696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:29.826 [2024-11-26 17:28:07.053299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:29.826 [2024-11-26 17:28:07.053406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:29.826 [2024-11-26 17:28:07.053554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:29.826 [2024-11-26 17:28:07.053582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:29.826 [2024-11-26 17:28:07.053718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:31:29.826 [2024-11-26 17:28:07.053729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:29.826 [2024-11-26 17:28:07.054021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:29.826 [2024-11-26 17:28:07.054185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:31:29.826 [2024-11-26 17:28:07.054205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:31:29.826 [2024-11-26 17:28:07.054379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:29.826 pt3 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:29.827 "name": "raid_bdev1", 00:31:29.827 "uuid": "46181549-ae8d-4163-85b7-7193ce9559e1", 00:31:29.827 "strip_size_kb": 0, 00:31:29.827 "state": "online", 00:31:29.827 "raid_level": "raid1", 00:31:29.827 "superblock": true, 00:31:29.827 "num_base_bdevs": 3, 00:31:29.827 "num_base_bdevs_discovered": 2, 00:31:29.827 "num_base_bdevs_operational": 2, 00:31:29.827 "base_bdevs_list": [ 00:31:29.827 { 00:31:29.827 "name": null, 00:31:29.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.827 "is_configured": false, 00:31:29.827 "data_offset": 2048, 00:31:29.827 "data_size": 63488 00:31:29.827 }, 00:31:29.827 { 00:31:29.827 "name": "pt2", 00:31:29.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:29.827 "is_configured": true, 00:31:29.827 "data_offset": 2048, 00:31:29.827 "data_size": 63488 00:31:29.827 }, 00:31:29.827 { 00:31:29.827 "name": "pt3", 00:31:29.827 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:29.827 "is_configured": true, 00:31:29.827 "data_offset": 2048, 00:31:29.827 "data_size": 63488 00:31:29.827 } 00:31:29.827 ] 00:31:29.827 }' 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:29.827 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:31:30.086 
17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.086 [2024-11-26 17:28:07.460426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 46181549-ae8d-4163-85b7-7193ce9559e1 '!=' 46181549-ae8d-4163-85b7-7193ce9559e1 ']' 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69067 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 69067 ']' 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 69067 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.086 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69067 00:31:30.345 killing process with pid 69067 00:31:30.345 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:30.345 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:30.345 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69067' 00:31:30.345 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 69067 00:31:30.345 [2024-11-26 
17:28:07.536379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:30.345 [2024-11-26 17:28:07.536469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:30.345 17:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 69067 00:31:30.345 [2024-11-26 17:28:07.536530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:30.345 [2024-11-26 17:28:07.536545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:31:30.604 [2024-11-26 17:28:07.847807] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:31.565 17:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:31:31.565 00:31:31.565 real 0m7.808s 00:31:31.565 user 0m12.269s 00:31:31.565 sys 0m1.474s 00:31:31.565 17:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.565 17:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.565 ************************************ 00:31:31.565 END TEST raid_superblock_test 00:31:31.565 ************************************ 00:31:31.825 17:28:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:31:31.825 17:28:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:31.825 17:28:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.825 17:28:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:31.825 ************************************ 00:31:31.825 START TEST raid_read_error_test 00:31:31.825 ************************************ 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:31:31.825 17:28:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EH3xfkQGIO 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69510 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69510 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:31:31.825 17:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69510 ']' 00:31:31.826 17:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.826 17:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.826 17:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.826 17:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.826 17:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.826 [2024-11-26 17:28:09.213155] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:31:31.826 [2024-11-26 17:28:09.213329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69510 ] 00:31:32.085 [2024-11-26 17:28:09.405645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.085 [2024-11-26 17:28:09.521424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.344 [2024-11-26 17:28:09.731864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:32.344 [2024-11-26 17:28:09.731914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 BaseBdev1_malloc 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 true 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 [2024-11-26 17:28:10.178086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:31:32.912 [2024-11-26 17:28:10.178138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.912 [2024-11-26 17:28:10.178161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:32.912 [2024-11-26 17:28:10.178175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.912 [2024-11-26 17:28:10.180708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.912 [2024-11-26 17:28:10.180747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:32.912 BaseBdev1 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 BaseBdev2_malloc 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 true 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 [2024-11-26 17:28:10.235519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:31:32.912 [2024-11-26 17:28:10.235589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.912 [2024-11-26 17:28:10.235610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:32.912 [2024-11-26 17:28:10.235624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.912 [2024-11-26 17:28:10.238145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.912 [2024-11-26 17:28:10.238182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:32.912 BaseBdev2 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 BaseBdev3_malloc 00:31:32.912 17:28:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 true 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.912 [2024-11-26 17:28:10.304175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:31:32.912 [2024-11-26 17:28:10.304225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.912 [2024-11-26 17:28:10.304244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:32.912 [2024-11-26 17:28:10.304258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.912 [2024-11-26 17:28:10.306627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.912 [2024-11-26 17:28:10.306665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:32.912 BaseBdev3 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.912 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.913 [2024-11-26 17:28:10.312252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:32.913 [2024-11-26 17:28:10.314389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:32.913 [2024-11-26 17:28:10.314466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:32.913 [2024-11-26 17:28:10.314673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:32.913 [2024-11-26 17:28:10.314693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:32.913 [2024-11-26 17:28:10.314948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:31:32.913 [2024-11-26 17:28:10.315123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:32.913 [2024-11-26 17:28:10.315144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:31:32.913 [2024-11-26 17:28:10.315292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:32.913 17:28:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.913 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.171 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:33.171 "name": "raid_bdev1", 00:31:33.171 "uuid": "1ec71be8-49b4-4943-b7ac-779b27d2481f", 00:31:33.171 "strip_size_kb": 0, 00:31:33.171 "state": "online", 00:31:33.171 "raid_level": "raid1", 00:31:33.171 "superblock": true, 00:31:33.171 "num_base_bdevs": 3, 00:31:33.171 "num_base_bdevs_discovered": 3, 00:31:33.171 "num_base_bdevs_operational": 3, 00:31:33.171 "base_bdevs_list": [ 00:31:33.171 { 00:31:33.171 "name": "BaseBdev1", 00:31:33.171 "uuid": "8ad9cd04-c529-53d4-beae-17b6ab1c3367", 00:31:33.171 "is_configured": true, 00:31:33.171 "data_offset": 2048, 00:31:33.171 "data_size": 63488 00:31:33.171 }, 00:31:33.171 { 00:31:33.171 "name": "BaseBdev2", 00:31:33.171 "uuid": "038b8db2-227e-520d-8695-b27a530b0506", 00:31:33.171 "is_configured": true, 00:31:33.171 "data_offset": 2048, 00:31:33.171 "data_size": 63488 
00:31:33.171 }, 00:31:33.171 { 00:31:33.171 "name": "BaseBdev3", 00:31:33.171 "uuid": "410a516c-13eb-5676-8dab-fa21de0ef2ff", 00:31:33.171 "is_configured": true, 00:31:33.171 "data_offset": 2048, 00:31:33.171 "data_size": 63488 00:31:33.171 } 00:31:33.171 ] 00:31:33.171 }' 00:31:33.171 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:33.172 17:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:33.430 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:31:33.430 17:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:33.689 [2024-11-26 17:28:10.906113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:31:34.623 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:31:34.623 17:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.623 17:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:34.624 
17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:34.624 "name": "raid_bdev1", 00:31:34.624 "uuid": "1ec71be8-49b4-4943-b7ac-779b27d2481f", 00:31:34.624 "strip_size_kb": 0, 00:31:34.624 "state": "online", 00:31:34.624 "raid_level": "raid1", 00:31:34.624 "superblock": true, 00:31:34.624 "num_base_bdevs": 3, 00:31:34.624 "num_base_bdevs_discovered": 3, 00:31:34.624 "num_base_bdevs_operational": 3, 00:31:34.624 "base_bdevs_list": [ 00:31:34.624 { 00:31:34.624 "name": "BaseBdev1", 00:31:34.624 "uuid": "8ad9cd04-c529-53d4-beae-17b6ab1c3367", 
00:31:34.624 "is_configured": true, 00:31:34.624 "data_offset": 2048, 00:31:34.624 "data_size": 63488 00:31:34.624 }, 00:31:34.624 { 00:31:34.624 "name": "BaseBdev2", 00:31:34.624 "uuid": "038b8db2-227e-520d-8695-b27a530b0506", 00:31:34.624 "is_configured": true, 00:31:34.624 "data_offset": 2048, 00:31:34.624 "data_size": 63488 00:31:34.624 }, 00:31:34.624 { 00:31:34.624 "name": "BaseBdev3", 00:31:34.624 "uuid": "410a516c-13eb-5676-8dab-fa21de0ef2ff", 00:31:34.624 "is_configured": true, 00:31:34.624 "data_offset": 2048, 00:31:34.624 "data_size": 63488 00:31:34.624 } 00:31:34.624 ] 00:31:34.624 }' 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:34.624 17:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.883 [2024-11-26 17:28:12.207485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:34.883 [2024-11-26 17:28:12.207522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:34.883 [2024-11-26 17:28:12.210458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:34.883 [2024-11-26 17:28:12.210515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:34.883 [2024-11-26 17:28:12.210639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:34.883 [2024-11-26 17:28:12.210653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:34.883 { 00:31:34.883 "results": [ 00:31:34.883 { 00:31:34.883 "job": "raid_bdev1", 
00:31:34.883 "core_mask": "0x1", 00:31:34.883 "workload": "randrw", 00:31:34.883 "percentage": 50, 00:31:34.883 "status": "finished", 00:31:34.883 "queue_depth": 1, 00:31:34.883 "io_size": 131072, 00:31:34.883 "runtime": 1.299217, 00:31:34.883 "iops": 12886.992704067143, 00:31:34.883 "mibps": 1610.8740880083928, 00:31:34.883 "io_failed": 0, 00:31:34.883 "io_timeout": 0, 00:31:34.883 "avg_latency_us": 74.74795471028405, 00:31:34.883 "min_latency_us": 24.86857142857143, 00:31:34.883 "max_latency_us": 1419.9466666666667 00:31:34.883 } 00:31:34.883 ], 00:31:34.883 "core_count": 1 00:31:34.883 } 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69510 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69510 ']' 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69510 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69510 00:31:34.883 killing process with pid 69510 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69510' 00:31:34.883 17:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69510 00:31:34.883 [2024-11-26 17:28:12.245983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:34.883 17:28:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69510 00:31:35.142 [2024-11-26 17:28:12.487325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EH3xfkQGIO 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:31:36.596 00:31:36.596 real 0m4.676s 00:31:36.596 user 0m5.614s 00:31:36.596 sys 0m0.609s 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.596 ************************************ 00:31:36.596 END TEST raid_read_error_test 00:31:36.596 ************************************ 00:31:36.596 17:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:36.596 17:28:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:31:36.596 17:28:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:36.596 17:28:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.596 17:28:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:36.596 ************************************ 00:31:36.596 START TEST raid_write_error_test 00:31:36.596 ************************************ 00:31:36.596 17:28:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rM48hdGQpc 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69660 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69660 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69660 ']' 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.596 17:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:36.596 [2024-11-26 17:28:13.893959] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:31:36.596 [2024-11-26 17:28:13.894118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69660 ] 00:31:36.854 [2024-11-26 17:28:14.078451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.854 [2024-11-26 17:28:14.200115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.111 [2024-11-26 17:28:14.424651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:37.111 [2024-11-26 17:28:14.424943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:37.370 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:37.370 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:31:37.370 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:37.370 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:37.370 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.370 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 BaseBdev1_malloc 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 true 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 [2024-11-26 17:28:14.862661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:31:37.629 [2024-11-26 17:28:14.862723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:37.629 [2024-11-26 17:28:14.862748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:37.629 [2024-11-26 17:28:14.862764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:37.629 [2024-11-26 17:28:14.865418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:37.629 [2024-11-26 17:28:14.865572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:37.629 BaseBdev1 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:37.629 BaseBdev2_malloc 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 true 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 [2024-11-26 17:28:14.924687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:31:37.629 [2024-11-26 17:28:14.924870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:37.629 [2024-11-26 17:28:14.924917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:37.629 [2024-11-26 17:28:14.924933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:37.629 [2024-11-26 17:28:14.927644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:37.629 [2024-11-26 17:28:14.927693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:37.629 BaseBdev2 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:37.629 17:28:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 BaseBdev3_malloc 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 true 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:31:37.629 17:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 [2024-11-26 17:28:15.007983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:31:37.629 [2024-11-26 17:28:15.008041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:37.629 [2024-11-26 17:28:15.008078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:37.629 [2024-11-26 17:28:15.008093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:37.629 [2024-11-26 17:28:15.010750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:37.629 [2024-11-26 17:28:15.010914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:31:37.629 BaseBdev3 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 [2024-11-26 17:28:15.016074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:37.629 [2024-11-26 17:28:15.018297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:37.629 [2024-11-26 17:28:15.018404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:37.629 [2024-11-26 17:28:15.018726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:37.629 [2024-11-26 17:28:15.018827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:37.629 [2024-11-26 17:28:15.019165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:31:37.629 [2024-11-26 17:28:15.019440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:37.629 [2024-11-26 17:28:15.019540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:31:37.629 [2024-11-26 17:28:15.019800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:37.629 "name": "raid_bdev1", 00:31:37.629 "uuid": "62109d71-390c-485f-96c2-327a4dbc6566", 00:31:37.629 "strip_size_kb": 0, 00:31:37.629 "state": "online", 00:31:37.629 "raid_level": "raid1", 00:31:37.629 "superblock": true, 00:31:37.629 "num_base_bdevs": 3, 00:31:37.629 "num_base_bdevs_discovered": 3, 00:31:37.629 "num_base_bdevs_operational": 3, 00:31:37.629 "base_bdevs_list": [ 00:31:37.629 { 00:31:37.629 "name": "BaseBdev1", 00:31:37.629 
"uuid": "f16e6676-762a-5e2e-8a6b-c16ef3233339", 00:31:37.629 "is_configured": true, 00:31:37.629 "data_offset": 2048, 00:31:37.629 "data_size": 63488 00:31:37.629 }, 00:31:37.629 { 00:31:37.629 "name": "BaseBdev2", 00:31:37.629 "uuid": "d365eab1-d29e-5a63-be8d-7524a0f841d5", 00:31:37.629 "is_configured": true, 00:31:37.629 "data_offset": 2048, 00:31:37.629 "data_size": 63488 00:31:37.629 }, 00:31:37.629 { 00:31:37.629 "name": "BaseBdev3", 00:31:37.629 "uuid": "a49a68f0-011f-503d-bc99-c52e15f5aa2f", 00:31:37.629 "is_configured": true, 00:31:37.629 "data_offset": 2048, 00:31:37.629 "data_size": 63488 00:31:37.629 } 00:31:37.629 ] 00:31:37.629 }' 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:37.629 17:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:38.195 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:31:38.195 17:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:38.195 [2024-11-26 17:28:15.581707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.129 [2024-11-26 17:28:16.469949] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:31:39.129 [2024-11-26 17:28:16.470023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:39.129 [2024-11-26 17:28:16.470272] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:39.129 "name": "raid_bdev1", 00:31:39.129 "uuid": "62109d71-390c-485f-96c2-327a4dbc6566", 00:31:39.129 "strip_size_kb": 0, 00:31:39.129 "state": "online", 00:31:39.129 "raid_level": "raid1", 00:31:39.129 "superblock": true, 00:31:39.129 "num_base_bdevs": 3, 00:31:39.129 "num_base_bdevs_discovered": 2, 00:31:39.129 "num_base_bdevs_operational": 2, 00:31:39.129 "base_bdevs_list": [ 00:31:39.129 { 00:31:39.129 "name": null, 00:31:39.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.129 "is_configured": false, 00:31:39.129 "data_offset": 0, 00:31:39.129 "data_size": 63488 00:31:39.129 }, 00:31:39.129 { 00:31:39.129 "name": "BaseBdev2", 00:31:39.129 "uuid": "d365eab1-d29e-5a63-be8d-7524a0f841d5", 00:31:39.129 "is_configured": true, 00:31:39.129 "data_offset": 2048, 00:31:39.129 "data_size": 63488 00:31:39.129 }, 00:31:39.129 { 00:31:39.129 "name": "BaseBdev3", 00:31:39.129 "uuid": "a49a68f0-011f-503d-bc99-c52e15f5aa2f", 00:31:39.129 "is_configured": true, 00:31:39.129 "data_offset": 2048, 00:31:39.129 "data_size": 63488 00:31:39.129 } 00:31:39.129 ] 00:31:39.129 }' 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:39.129 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.697 [2024-11-26 17:28:16.933657] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:39.697 [2024-11-26 17:28:16.933876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:39.697 [2024-11-26 17:28:16.937183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:39.697 [2024-11-26 17:28:16.937241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:39.697 [2024-11-26 17:28:16.937323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:39.697 [2024-11-26 17:28:16.937344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:39.697 { 00:31:39.697 "results": [ 00:31:39.697 { 00:31:39.697 "job": "raid_bdev1", 00:31:39.697 "core_mask": "0x1", 00:31:39.697 "workload": "randrw", 00:31:39.697 "percentage": 50, 00:31:39.697 "status": "finished", 00:31:39.697 "queue_depth": 1, 00:31:39.697 "io_size": 131072, 00:31:39.697 "runtime": 1.349888, 00:31:39.697 "iops": 14298.223260003793, 00:31:39.697 "mibps": 1787.2779075004742, 00:31:39.697 "io_failed": 0, 00:31:39.697 "io_timeout": 0, 00:31:39.697 "avg_latency_us": 67.05208444664848, 00:31:39.697 "min_latency_us": 24.99047619047619, 00:31:39.697 "max_latency_us": 1560.3809523809523 00:31:39.697 } 00:31:39.697 ], 00:31:39.697 "core_count": 1 00:31:39.697 } 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69660 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69660 ']' 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69660 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:31:39.697 17:28:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69660 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69660' 00:31:39.697 killing process with pid 69660 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69660 00:31:39.697 [2024-11-26 17:28:16.978915] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:39.697 17:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69660 00:31:39.956 [2024-11-26 17:28:17.222850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rM48hdGQpc 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:31:41.334 00:31:41.334 real 0m4.663s 00:31:41.334 user 0m5.575s 00:31:41.334 sys 0m0.600s 00:31:41.334 
************************************ 00:31:41.334 END TEST raid_write_error_test 00:31:41.334 ************************************ 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:41.334 17:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.334 17:28:18 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:31:41.334 17:28:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:31:41.334 17:28:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:31:41.334 17:28:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:41.334 17:28:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:41.334 17:28:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:41.334 ************************************ 00:31:41.334 START TEST raid_state_function_test 00:31:41.334 ************************************ 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:31:41.334 Process raid pid: 69806 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69806 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69806' 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69806 00:31:41.334 17:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:41.335 17:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69806 ']' 00:31:41.335 17:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.335 17:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:41.335 17:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.335 17:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:41.335 17:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.335 [2024-11-26 17:28:18.648993] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:31:41.335 [2024-11-26 17:28:18.649427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.620 [2024-11-26 17:28:18.841742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.620 [2024-11-26 17:28:18.963266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.878 [2024-11-26 17:28:19.186895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:41.878 [2024-11-26 17:28:19.186932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.446 [2024-11-26 17:28:19.598862] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:42.446 [2024-11-26 17:28:19.599083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:42.446 [2024-11-26 17:28:19.599125] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:42.446 [2024-11-26 17:28:19.599140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:42.446 [2024-11-26 17:28:19.599149] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:31:42.446 [2024-11-26 17:28:19.599174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:42.446 [2024-11-26 17:28:19.599182] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:42.446 [2024-11-26 17:28:19.599193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.446 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.447 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:42.447 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.447 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.447 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:31:42.447 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.447 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.447 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.447 "name": "Existed_Raid", 00:31:42.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.447 "strip_size_kb": 64, 00:31:42.447 "state": "configuring", 00:31:42.447 "raid_level": "raid0", 00:31:42.447 "superblock": false, 00:31:42.447 "num_base_bdevs": 4, 00:31:42.447 "num_base_bdevs_discovered": 0, 00:31:42.447 "num_base_bdevs_operational": 4, 00:31:42.447 "base_bdevs_list": [ 00:31:42.447 { 00:31:42.447 "name": "BaseBdev1", 00:31:42.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.447 "is_configured": false, 00:31:42.447 "data_offset": 0, 00:31:42.447 "data_size": 0 00:31:42.447 }, 00:31:42.447 { 00:31:42.447 "name": "BaseBdev2", 00:31:42.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.447 "is_configured": false, 00:31:42.447 "data_offset": 0, 00:31:42.447 "data_size": 0 00:31:42.447 }, 00:31:42.447 { 00:31:42.447 "name": "BaseBdev3", 00:31:42.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.447 "is_configured": false, 00:31:42.447 "data_offset": 0, 00:31:42.447 "data_size": 0 00:31:42.447 }, 00:31:42.447 { 00:31:42.447 "name": "BaseBdev4", 00:31:42.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.447 "is_configured": false, 00:31:42.447 "data_offset": 0, 00:31:42.447 "data_size": 0 00:31:42.447 } 00:31:42.447 ] 00:31:42.447 }' 00:31:42.447 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.447 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.705 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:31:42.705 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.705 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.705 [2024-11-26 17:28:20.066947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:42.705 [2024-11-26 17:28:20.067181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:42.705 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.705 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:42.705 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.705 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.705 [2024-11-26 17:28:20.078939] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:42.705 [2024-11-26 17:28:20.079115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:42.705 [2024-11-26 17:28:20.079138] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:42.706 [2024-11-26 17:28:20.079153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:42.706 [2024-11-26 17:28:20.079162] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:42.706 [2024-11-26 17:28:20.079175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:42.706 [2024-11-26 17:28:20.079183] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:42.706 [2024-11-26 17:28:20.079196] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.706 [2024-11-26 17:28:20.127595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:42.706 BaseBdev1 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.706 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.965 [ 00:31:42.965 { 00:31:42.965 "name": "BaseBdev1", 00:31:42.965 "aliases": [ 00:31:42.965 "e8dfeba4-7239-4d10-8d98-50fa2f57218f" 00:31:42.965 ], 00:31:42.965 "product_name": "Malloc disk", 00:31:42.965 "block_size": 512, 00:31:42.965 "num_blocks": 65536, 00:31:42.965 "uuid": "e8dfeba4-7239-4d10-8d98-50fa2f57218f", 00:31:42.965 "assigned_rate_limits": { 00:31:42.965 "rw_ios_per_sec": 0, 00:31:42.965 "rw_mbytes_per_sec": 0, 00:31:42.965 "r_mbytes_per_sec": 0, 00:31:42.965 "w_mbytes_per_sec": 0 00:31:42.965 }, 00:31:42.965 "claimed": true, 00:31:42.965 "claim_type": "exclusive_write", 00:31:42.965 "zoned": false, 00:31:42.965 "supported_io_types": { 00:31:42.965 "read": true, 00:31:42.965 "write": true, 00:31:42.965 "unmap": true, 00:31:42.965 "flush": true, 00:31:42.965 "reset": true, 00:31:42.965 "nvme_admin": false, 00:31:42.965 "nvme_io": false, 00:31:42.965 "nvme_io_md": false, 00:31:42.965 "write_zeroes": true, 00:31:42.965 "zcopy": true, 00:31:42.965 "get_zone_info": false, 00:31:42.965 "zone_management": false, 00:31:42.965 "zone_append": false, 00:31:42.965 "compare": false, 00:31:42.965 "compare_and_write": false, 00:31:42.965 "abort": true, 00:31:42.965 "seek_hole": false, 00:31:42.965 "seek_data": false, 00:31:42.965 "copy": true, 00:31:42.965 "nvme_iov_md": false 00:31:42.965 }, 00:31:42.965 "memory_domains": [ 00:31:42.965 { 00:31:42.965 "dma_device_id": "system", 00:31:42.965 "dma_device_type": 1 00:31:42.965 }, 00:31:42.965 { 00:31:42.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:42.965 "dma_device_type": 2 00:31:42.965 } 00:31:42.965 ], 00:31:42.965 "driver_specific": {} 00:31:42.965 } 00:31:42.965 ] 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.965 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.965 "name": "Existed_Raid", 
00:31:42.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.966 "strip_size_kb": 64, 00:31:42.966 "state": "configuring", 00:31:42.966 "raid_level": "raid0", 00:31:42.966 "superblock": false, 00:31:42.966 "num_base_bdevs": 4, 00:31:42.966 "num_base_bdevs_discovered": 1, 00:31:42.966 "num_base_bdevs_operational": 4, 00:31:42.966 "base_bdevs_list": [ 00:31:42.966 { 00:31:42.966 "name": "BaseBdev1", 00:31:42.966 "uuid": "e8dfeba4-7239-4d10-8d98-50fa2f57218f", 00:31:42.966 "is_configured": true, 00:31:42.966 "data_offset": 0, 00:31:42.966 "data_size": 65536 00:31:42.966 }, 00:31:42.966 { 00:31:42.966 "name": "BaseBdev2", 00:31:42.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.966 "is_configured": false, 00:31:42.966 "data_offset": 0, 00:31:42.966 "data_size": 0 00:31:42.966 }, 00:31:42.966 { 00:31:42.966 "name": "BaseBdev3", 00:31:42.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.966 "is_configured": false, 00:31:42.966 "data_offset": 0, 00:31:42.966 "data_size": 0 00:31:42.966 }, 00:31:42.966 { 00:31:42.966 "name": "BaseBdev4", 00:31:42.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.966 "is_configured": false, 00:31:42.966 "data_offset": 0, 00:31:42.966 "data_size": 0 00:31:42.966 } 00:31:42.966 ] 00:31:42.966 }' 00:31:42.966 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.966 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.225 [2024-11-26 17:28:20.623747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:43.225 [2024-11-26 17:28:20.623805] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.225 [2024-11-26 17:28:20.631801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:43.225 [2024-11-26 17:28:20.633862] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:43.225 [2024-11-26 17:28:20.633909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:43.225 [2024-11-26 17:28:20.633921] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:43.225 [2024-11-26 17:28:20.633936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:43.225 [2024-11-26 17:28:20.633944] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:43.225 [2024-11-26 17:28:20.633955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.225 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.226 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.484 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:43.484 "name": "Existed_Raid", 00:31:43.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.484 "strip_size_kb": 64, 00:31:43.484 "state": "configuring", 00:31:43.484 "raid_level": "raid0", 00:31:43.484 "superblock": false, 00:31:43.484 "num_base_bdevs": 4, 00:31:43.484 
"num_base_bdevs_discovered": 1, 00:31:43.484 "num_base_bdevs_operational": 4, 00:31:43.484 "base_bdevs_list": [ 00:31:43.484 { 00:31:43.484 "name": "BaseBdev1", 00:31:43.484 "uuid": "e8dfeba4-7239-4d10-8d98-50fa2f57218f", 00:31:43.484 "is_configured": true, 00:31:43.484 "data_offset": 0, 00:31:43.484 "data_size": 65536 00:31:43.484 }, 00:31:43.484 { 00:31:43.484 "name": "BaseBdev2", 00:31:43.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.484 "is_configured": false, 00:31:43.484 "data_offset": 0, 00:31:43.484 "data_size": 0 00:31:43.484 }, 00:31:43.484 { 00:31:43.484 "name": "BaseBdev3", 00:31:43.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.484 "is_configured": false, 00:31:43.484 "data_offset": 0, 00:31:43.484 "data_size": 0 00:31:43.484 }, 00:31:43.484 { 00:31:43.484 "name": "BaseBdev4", 00:31:43.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.484 "is_configured": false, 00:31:43.484 "data_offset": 0, 00:31:43.484 "data_size": 0 00:31:43.484 } 00:31:43.484 ] 00:31:43.484 }' 00:31:43.484 17:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:43.484 17:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.743 [2024-11-26 17:28:21.124849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:43.743 BaseBdev2 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:43.743 17:28:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.743 [ 00:31:43.743 { 00:31:43.743 "name": "BaseBdev2", 00:31:43.743 "aliases": [ 00:31:43.743 "503f807e-b4d2-4bd1-8e35-490f8c6561db" 00:31:43.743 ], 00:31:43.743 "product_name": "Malloc disk", 00:31:43.743 "block_size": 512, 00:31:43.743 "num_blocks": 65536, 00:31:43.743 "uuid": "503f807e-b4d2-4bd1-8e35-490f8c6561db", 00:31:43.743 "assigned_rate_limits": { 00:31:43.743 "rw_ios_per_sec": 0, 00:31:43.743 "rw_mbytes_per_sec": 0, 00:31:43.743 "r_mbytes_per_sec": 0, 00:31:43.743 "w_mbytes_per_sec": 0 00:31:43.743 }, 00:31:43.743 "claimed": true, 00:31:43.743 "claim_type": "exclusive_write", 00:31:43.743 "zoned": false, 00:31:43.743 "supported_io_types": { 
00:31:43.743 "read": true, 00:31:43.743 "write": true, 00:31:43.743 "unmap": true, 00:31:43.743 "flush": true, 00:31:43.743 "reset": true, 00:31:43.743 "nvme_admin": false, 00:31:43.743 "nvme_io": false, 00:31:43.743 "nvme_io_md": false, 00:31:43.743 "write_zeroes": true, 00:31:43.743 "zcopy": true, 00:31:43.743 "get_zone_info": false, 00:31:43.743 "zone_management": false, 00:31:43.743 "zone_append": false, 00:31:43.743 "compare": false, 00:31:43.743 "compare_and_write": false, 00:31:43.743 "abort": true, 00:31:43.743 "seek_hole": false, 00:31:43.743 "seek_data": false, 00:31:43.743 "copy": true, 00:31:43.743 "nvme_iov_md": false 00:31:43.743 }, 00:31:43.743 "memory_domains": [ 00:31:43.743 { 00:31:43.743 "dma_device_id": "system", 00:31:43.743 "dma_device_type": 1 00:31:43.743 }, 00:31:43.743 { 00:31:43.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:43.743 "dma_device_type": 2 00:31:43.743 } 00:31:43.743 ], 00:31:43.743 "driver_specific": {} 00:31:43.743 } 00:31:43.743 ] 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:43.743 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.002 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.002 "name": "Existed_Raid", 00:31:44.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.002 "strip_size_kb": 64, 00:31:44.002 "state": "configuring", 00:31:44.002 "raid_level": "raid0", 00:31:44.002 "superblock": false, 00:31:44.002 "num_base_bdevs": 4, 00:31:44.002 "num_base_bdevs_discovered": 2, 00:31:44.002 "num_base_bdevs_operational": 4, 00:31:44.002 "base_bdevs_list": [ 00:31:44.002 { 00:31:44.002 "name": "BaseBdev1", 00:31:44.002 "uuid": "e8dfeba4-7239-4d10-8d98-50fa2f57218f", 00:31:44.002 "is_configured": true, 00:31:44.002 "data_offset": 0, 00:31:44.002 "data_size": 65536 00:31:44.002 }, 00:31:44.002 { 00:31:44.002 "name": "BaseBdev2", 00:31:44.002 "uuid": "503f807e-b4d2-4bd1-8e35-490f8c6561db", 00:31:44.002 
"is_configured": true, 00:31:44.002 "data_offset": 0, 00:31:44.002 "data_size": 65536 00:31:44.002 }, 00:31:44.002 { 00:31:44.002 "name": "BaseBdev3", 00:31:44.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.002 "is_configured": false, 00:31:44.002 "data_offset": 0, 00:31:44.002 "data_size": 0 00:31:44.002 }, 00:31:44.002 { 00:31:44.002 "name": "BaseBdev4", 00:31:44.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.002 "is_configured": false, 00:31:44.002 "data_offset": 0, 00:31:44.002 "data_size": 0 00:31:44.002 } 00:31:44.002 ] 00:31:44.002 }' 00:31:44.002 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.002 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.261 [2024-11-26 17:28:21.654499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:44.261 BaseBdev3 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.261 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.261 [ 00:31:44.261 { 00:31:44.261 "name": "BaseBdev3", 00:31:44.261 "aliases": [ 00:31:44.261 "5269c958-3b2a-4ed8-a648-f38724759940" 00:31:44.261 ], 00:31:44.261 "product_name": "Malloc disk", 00:31:44.261 "block_size": 512, 00:31:44.261 "num_blocks": 65536, 00:31:44.261 "uuid": "5269c958-3b2a-4ed8-a648-f38724759940", 00:31:44.261 "assigned_rate_limits": { 00:31:44.261 "rw_ios_per_sec": 0, 00:31:44.261 "rw_mbytes_per_sec": 0, 00:31:44.261 "r_mbytes_per_sec": 0, 00:31:44.261 "w_mbytes_per_sec": 0 00:31:44.261 }, 00:31:44.261 "claimed": true, 00:31:44.261 "claim_type": "exclusive_write", 00:31:44.261 "zoned": false, 00:31:44.261 "supported_io_types": { 00:31:44.261 "read": true, 00:31:44.261 "write": true, 00:31:44.261 "unmap": true, 00:31:44.261 "flush": true, 00:31:44.261 "reset": true, 00:31:44.261 "nvme_admin": false, 00:31:44.261 "nvme_io": false, 00:31:44.261 "nvme_io_md": false, 00:31:44.261 "write_zeroes": true, 00:31:44.261 "zcopy": true, 00:31:44.261 "get_zone_info": false, 00:31:44.261 "zone_management": false, 00:31:44.261 "zone_append": false, 00:31:44.261 "compare": false, 00:31:44.261 "compare_and_write": false, 
00:31:44.261 "abort": true, 00:31:44.261 "seek_hole": false, 00:31:44.261 "seek_data": false, 00:31:44.261 "copy": true, 00:31:44.261 "nvme_iov_md": false 00:31:44.261 }, 00:31:44.261 "memory_domains": [ 00:31:44.261 { 00:31:44.261 "dma_device_id": "system", 00:31:44.261 "dma_device_type": 1 00:31:44.261 }, 00:31:44.261 { 00:31:44.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:44.261 "dma_device_type": 2 00:31:44.261 } 00:31:44.261 ], 00:31:44.262 "driver_specific": {} 00:31:44.262 } 00:31:44.262 ] 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.262 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.520 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.520 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.520 "name": "Existed_Raid", 00:31:44.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.520 "strip_size_kb": 64, 00:31:44.520 "state": "configuring", 00:31:44.520 "raid_level": "raid0", 00:31:44.520 "superblock": false, 00:31:44.520 "num_base_bdevs": 4, 00:31:44.520 "num_base_bdevs_discovered": 3, 00:31:44.520 "num_base_bdevs_operational": 4, 00:31:44.520 "base_bdevs_list": [ 00:31:44.520 { 00:31:44.520 "name": "BaseBdev1", 00:31:44.520 "uuid": "e8dfeba4-7239-4d10-8d98-50fa2f57218f", 00:31:44.520 "is_configured": true, 00:31:44.520 "data_offset": 0, 00:31:44.520 "data_size": 65536 00:31:44.520 }, 00:31:44.520 { 00:31:44.520 "name": "BaseBdev2", 00:31:44.520 "uuid": "503f807e-b4d2-4bd1-8e35-490f8c6561db", 00:31:44.520 "is_configured": true, 00:31:44.520 "data_offset": 0, 00:31:44.520 "data_size": 65536 00:31:44.520 }, 00:31:44.520 { 00:31:44.520 "name": "BaseBdev3", 00:31:44.520 "uuid": "5269c958-3b2a-4ed8-a648-f38724759940", 00:31:44.520 "is_configured": true, 00:31:44.520 "data_offset": 0, 00:31:44.520 "data_size": 65536 00:31:44.520 }, 00:31:44.520 { 00:31:44.520 "name": "BaseBdev4", 00:31:44.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.520 "is_configured": false, 
00:31:44.520 "data_offset": 0, 00:31:44.520 "data_size": 0 00:31:44.520 } 00:31:44.520 ] 00:31:44.520 }' 00:31:44.520 17:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.520 17:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.779 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:31:44.779 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.779 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.779 [2024-11-26 17:28:22.172818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:44.780 [2024-11-26 17:28:22.173082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:44.780 [2024-11-26 17:28:22.173105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:31:44.780 [2024-11-26 17:28:22.173448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:44.780 [2024-11-26 17:28:22.173615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:44.780 [2024-11-26 17:28:22.173630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:44.780 [2024-11-26 17:28:22.173925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.780 BaseBdev4 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.780 [ 00:31:44.780 { 00:31:44.780 "name": "BaseBdev4", 00:31:44.780 "aliases": [ 00:31:44.780 "fa5d07d3-44a7-4bc5-b2e8-56493fd0e8ed" 00:31:44.780 ], 00:31:44.780 "product_name": "Malloc disk", 00:31:44.780 "block_size": 512, 00:31:44.780 "num_blocks": 65536, 00:31:44.780 "uuid": "fa5d07d3-44a7-4bc5-b2e8-56493fd0e8ed", 00:31:44.780 "assigned_rate_limits": { 00:31:44.780 "rw_ios_per_sec": 0, 00:31:44.780 "rw_mbytes_per_sec": 0, 00:31:44.780 "r_mbytes_per_sec": 0, 00:31:44.780 "w_mbytes_per_sec": 0 00:31:44.780 }, 00:31:44.780 "claimed": true, 00:31:44.780 "claim_type": "exclusive_write", 00:31:44.780 "zoned": false, 00:31:44.780 "supported_io_types": { 00:31:44.780 "read": true, 00:31:44.780 "write": true, 00:31:44.780 "unmap": true, 00:31:44.780 "flush": true, 00:31:44.780 "reset": true, 00:31:44.780 
"nvme_admin": false, 00:31:44.780 "nvme_io": false, 00:31:44.780 "nvme_io_md": false, 00:31:44.780 "write_zeroes": true, 00:31:44.780 "zcopy": true, 00:31:44.780 "get_zone_info": false, 00:31:44.780 "zone_management": false, 00:31:44.780 "zone_append": false, 00:31:44.780 "compare": false, 00:31:44.780 "compare_and_write": false, 00:31:44.780 "abort": true, 00:31:44.780 "seek_hole": false, 00:31:44.780 "seek_data": false, 00:31:44.780 "copy": true, 00:31:44.780 "nvme_iov_md": false 00:31:44.780 }, 00:31:44.780 "memory_domains": [ 00:31:44.780 { 00:31:44.780 "dma_device_id": "system", 00:31:44.780 "dma_device_type": 1 00:31:44.780 }, 00:31:44.780 { 00:31:44.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:44.780 "dma_device_type": 2 00:31:44.780 } 00:31:44.780 ], 00:31:44.780 "driver_specific": {} 00:31:44.780 } 00:31:44.780 ] 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:44.780 17:28:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.780 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.039 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.039 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:45.039 "name": "Existed_Raid", 00:31:45.039 "uuid": "154d5bc8-7551-4e69-98b6-177a8adec653", 00:31:45.039 "strip_size_kb": 64, 00:31:45.039 "state": "online", 00:31:45.039 "raid_level": "raid0", 00:31:45.039 "superblock": false, 00:31:45.039 "num_base_bdevs": 4, 00:31:45.039 "num_base_bdevs_discovered": 4, 00:31:45.039 "num_base_bdevs_operational": 4, 00:31:45.039 "base_bdevs_list": [ 00:31:45.039 { 00:31:45.039 "name": "BaseBdev1", 00:31:45.039 "uuid": "e8dfeba4-7239-4d10-8d98-50fa2f57218f", 00:31:45.039 "is_configured": true, 00:31:45.039 "data_offset": 0, 00:31:45.039 "data_size": 65536 00:31:45.039 }, 00:31:45.039 { 00:31:45.039 "name": "BaseBdev2", 00:31:45.039 "uuid": "503f807e-b4d2-4bd1-8e35-490f8c6561db", 00:31:45.039 "is_configured": true, 00:31:45.039 "data_offset": 0, 00:31:45.039 "data_size": 65536 00:31:45.039 }, 00:31:45.039 { 00:31:45.039 "name": "BaseBdev3", 00:31:45.039 "uuid": 
"5269c958-3b2a-4ed8-a648-f38724759940", 00:31:45.039 "is_configured": true, 00:31:45.039 "data_offset": 0, 00:31:45.039 "data_size": 65536 00:31:45.039 }, 00:31:45.039 { 00:31:45.039 "name": "BaseBdev4", 00:31:45.039 "uuid": "fa5d07d3-44a7-4bc5-b2e8-56493fd0e8ed", 00:31:45.039 "is_configured": true, 00:31:45.039 "data_offset": 0, 00:31:45.039 "data_size": 65536 00:31:45.039 } 00:31:45.039 ] 00:31:45.039 }' 00:31:45.039 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:45.039 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.299 [2024-11-26 17:28:22.665369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.299 17:28:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.299 "name": "Existed_Raid", 00:31:45.299 "aliases": [ 00:31:45.299 "154d5bc8-7551-4e69-98b6-177a8adec653" 00:31:45.299 ], 00:31:45.299 "product_name": "Raid Volume", 00:31:45.299 "block_size": 512, 00:31:45.299 "num_blocks": 262144, 00:31:45.299 "uuid": "154d5bc8-7551-4e69-98b6-177a8adec653", 00:31:45.299 "assigned_rate_limits": { 00:31:45.299 "rw_ios_per_sec": 0, 00:31:45.299 "rw_mbytes_per_sec": 0, 00:31:45.299 "r_mbytes_per_sec": 0, 00:31:45.299 "w_mbytes_per_sec": 0 00:31:45.299 }, 00:31:45.299 "claimed": false, 00:31:45.299 "zoned": false, 00:31:45.299 "supported_io_types": { 00:31:45.299 "read": true, 00:31:45.299 "write": true, 00:31:45.299 "unmap": true, 00:31:45.299 "flush": true, 00:31:45.299 "reset": true, 00:31:45.299 "nvme_admin": false, 00:31:45.299 "nvme_io": false, 00:31:45.299 "nvme_io_md": false, 00:31:45.299 "write_zeroes": true, 00:31:45.299 "zcopy": false, 00:31:45.299 "get_zone_info": false, 00:31:45.299 "zone_management": false, 00:31:45.299 "zone_append": false, 00:31:45.299 "compare": false, 00:31:45.299 "compare_and_write": false, 00:31:45.299 "abort": false, 00:31:45.299 "seek_hole": false, 00:31:45.299 "seek_data": false, 00:31:45.299 "copy": false, 00:31:45.299 "nvme_iov_md": false 00:31:45.299 }, 00:31:45.299 "memory_domains": [ 00:31:45.299 { 00:31:45.299 "dma_device_id": "system", 00:31:45.299 "dma_device_type": 1 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.299 "dma_device_type": 2 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "dma_device_id": "system", 00:31:45.299 "dma_device_type": 1 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.299 "dma_device_type": 2 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "dma_device_id": "system", 00:31:45.299 "dma_device_type": 1 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:31:45.299 "dma_device_type": 2 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "dma_device_id": "system", 00:31:45.299 "dma_device_type": 1 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.299 "dma_device_type": 2 00:31:45.299 } 00:31:45.299 ], 00:31:45.299 "driver_specific": { 00:31:45.299 "raid": { 00:31:45.299 "uuid": "154d5bc8-7551-4e69-98b6-177a8adec653", 00:31:45.299 "strip_size_kb": 64, 00:31:45.299 "state": "online", 00:31:45.299 "raid_level": "raid0", 00:31:45.299 "superblock": false, 00:31:45.299 "num_base_bdevs": 4, 00:31:45.299 "num_base_bdevs_discovered": 4, 00:31:45.299 "num_base_bdevs_operational": 4, 00:31:45.299 "base_bdevs_list": [ 00:31:45.299 { 00:31:45.299 "name": "BaseBdev1", 00:31:45.299 "uuid": "e8dfeba4-7239-4d10-8d98-50fa2f57218f", 00:31:45.299 "is_configured": true, 00:31:45.299 "data_offset": 0, 00:31:45.299 "data_size": 65536 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "name": "BaseBdev2", 00:31:45.299 "uuid": "503f807e-b4d2-4bd1-8e35-490f8c6561db", 00:31:45.299 "is_configured": true, 00:31:45.299 "data_offset": 0, 00:31:45.299 "data_size": 65536 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "name": "BaseBdev3", 00:31:45.299 "uuid": "5269c958-3b2a-4ed8-a648-f38724759940", 00:31:45.299 "is_configured": true, 00:31:45.299 "data_offset": 0, 00:31:45.299 "data_size": 65536 00:31:45.299 }, 00:31:45.299 { 00:31:45.299 "name": "BaseBdev4", 00:31:45.299 "uuid": "fa5d07d3-44a7-4bc5-b2e8-56493fd0e8ed", 00:31:45.299 "is_configured": true, 00:31:45.299 "data_offset": 0, 00:31:45.299 "data_size": 65536 00:31:45.299 } 00:31:45.299 ] 00:31:45.299 } 00:31:45.299 } 00:31:45.299 }' 00:31:45.299 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:45.559 BaseBdev2 00:31:45.559 BaseBdev3 
00:31:45.559 BaseBdev4' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.559 17:28:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.559 17:28:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.559 17:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.559 [2024-11-26 17:28:22.989138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:45.559 [2024-11-26 17:28:22.989169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:45.559 [2024-11-26 17:28:22.989220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:45.818 "name": "Existed_Raid", 00:31:45.818 "uuid": "154d5bc8-7551-4e69-98b6-177a8adec653", 00:31:45.818 "strip_size_kb": 64, 00:31:45.818 "state": "offline", 00:31:45.818 "raid_level": "raid0", 00:31:45.818 "superblock": false, 00:31:45.818 "num_base_bdevs": 4, 00:31:45.818 "num_base_bdevs_discovered": 3, 00:31:45.818 "num_base_bdevs_operational": 3, 00:31:45.818 "base_bdevs_list": [ 00:31:45.818 { 00:31:45.818 "name": null, 00:31:45.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.818 "is_configured": false, 00:31:45.818 "data_offset": 0, 00:31:45.818 "data_size": 65536 00:31:45.818 }, 00:31:45.818 { 00:31:45.818 "name": "BaseBdev2", 00:31:45.818 "uuid": "503f807e-b4d2-4bd1-8e35-490f8c6561db", 00:31:45.818 "is_configured": 
true, 00:31:45.818 "data_offset": 0, 00:31:45.818 "data_size": 65536 00:31:45.818 }, 00:31:45.818 { 00:31:45.818 "name": "BaseBdev3", 00:31:45.818 "uuid": "5269c958-3b2a-4ed8-a648-f38724759940", 00:31:45.818 "is_configured": true, 00:31:45.818 "data_offset": 0, 00:31:45.818 "data_size": 65536 00:31:45.818 }, 00:31:45.818 { 00:31:45.818 "name": "BaseBdev4", 00:31:45.818 "uuid": "fa5d07d3-44a7-4bc5-b2e8-56493fd0e8ed", 00:31:45.818 "is_configured": true, 00:31:45.818 "data_offset": 0, 00:31:45.818 "data_size": 65536 00:31:45.818 } 00:31:45.818 ] 00:31:45.818 }' 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:45.818 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.387 [2024-11-26 17:28:23.601482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.387 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.387 [2024-11-26 17:28:23.758689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:46.646 17:28:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.646 17:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.646 [2024-11-26 17:28:23.917328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:31:46.646 [2024-11-26 17:28:23.917379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.646 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.905 BaseBdev2 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.905 [ 00:31:46.905 { 00:31:46.905 "name": "BaseBdev2", 00:31:46.905 "aliases": [ 00:31:46.905 "1eff40e9-c672-47e5-b45f-aa0b0709a5b2" 00:31:46.905 ], 00:31:46.905 "product_name": "Malloc disk", 00:31:46.905 "block_size": 512, 00:31:46.905 "num_blocks": 65536, 00:31:46.905 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:46.905 "assigned_rate_limits": { 00:31:46.905 "rw_ios_per_sec": 0, 00:31:46.905 "rw_mbytes_per_sec": 0, 00:31:46.905 "r_mbytes_per_sec": 0, 00:31:46.905 "w_mbytes_per_sec": 0 00:31:46.905 }, 00:31:46.905 "claimed": false, 00:31:46.905 "zoned": false, 00:31:46.905 "supported_io_types": { 00:31:46.905 "read": true, 00:31:46.905 "write": true, 00:31:46.905 "unmap": true, 00:31:46.905 "flush": true, 00:31:46.905 "reset": true, 00:31:46.905 "nvme_admin": false, 00:31:46.905 "nvme_io": false, 00:31:46.905 "nvme_io_md": false, 00:31:46.905 "write_zeroes": true, 00:31:46.905 "zcopy": true, 00:31:46.905 "get_zone_info": false, 00:31:46.905 "zone_management": false, 00:31:46.905 "zone_append": false, 00:31:46.905 "compare": false, 00:31:46.905 "compare_and_write": false, 00:31:46.905 "abort": true, 00:31:46.905 "seek_hole": false, 00:31:46.905 "seek_data": false, 
00:31:46.905 "copy": true, 00:31:46.905 "nvme_iov_md": false 00:31:46.905 }, 00:31:46.905 "memory_domains": [ 00:31:46.905 { 00:31:46.905 "dma_device_id": "system", 00:31:46.905 "dma_device_type": 1 00:31:46.905 }, 00:31:46.905 { 00:31:46.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.905 "dma_device_type": 2 00:31:46.905 } 00:31:46.905 ], 00:31:46.905 "driver_specific": {} 00:31:46.905 } 00:31:46.905 ] 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.905 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 BaseBdev3 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:46.906 
17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 [ 00:31:46.906 { 00:31:46.906 "name": "BaseBdev3", 00:31:46.906 "aliases": [ 00:31:46.906 "1bcec351-3a31-4afb-b705-d446c88607bc" 00:31:46.906 ], 00:31:46.906 "product_name": "Malloc disk", 00:31:46.906 "block_size": 512, 00:31:46.906 "num_blocks": 65536, 00:31:46.906 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 00:31:46.906 "assigned_rate_limits": { 00:31:46.906 "rw_ios_per_sec": 0, 00:31:46.906 "rw_mbytes_per_sec": 0, 00:31:46.906 "r_mbytes_per_sec": 0, 00:31:46.906 "w_mbytes_per_sec": 0 00:31:46.906 }, 00:31:46.906 "claimed": false, 00:31:46.906 "zoned": false, 00:31:46.906 "supported_io_types": { 00:31:46.906 "read": true, 00:31:46.906 "write": true, 00:31:46.906 "unmap": true, 00:31:46.906 "flush": true, 00:31:46.906 "reset": true, 00:31:46.906 "nvme_admin": false, 00:31:46.906 "nvme_io": false, 00:31:46.906 "nvme_io_md": false, 00:31:46.906 "write_zeroes": true, 00:31:46.906 "zcopy": true, 00:31:46.906 "get_zone_info": false, 00:31:46.906 "zone_management": false, 00:31:46.906 "zone_append": false, 00:31:46.906 "compare": false, 00:31:46.906 "compare_and_write": false, 00:31:46.906 "abort": true, 00:31:46.906 "seek_hole": false, 00:31:46.906 "seek_data": false, 00:31:46.906 
"copy": true, 00:31:46.906 "nvme_iov_md": false 00:31:46.906 }, 00:31:46.906 "memory_domains": [ 00:31:46.906 { 00:31:46.906 "dma_device_id": "system", 00:31:46.906 "dma_device_type": 1 00:31:46.906 }, 00:31:46.906 { 00:31:46.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.906 "dma_device_type": 2 00:31:46.906 } 00:31:46.906 ], 00:31:46.906 "driver_specific": {} 00:31:46.906 } 00:31:46.906 ] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 BaseBdev4 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:46.906 17:28:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 [ 00:31:46.906 { 00:31:46.906 "name": "BaseBdev4", 00:31:46.906 "aliases": [ 00:31:46.906 "a4d85439-ed34-40e1-9842-d03222b53288" 00:31:46.906 ], 00:31:46.906 "product_name": "Malloc disk", 00:31:46.906 "block_size": 512, 00:31:46.906 "num_blocks": 65536, 00:31:46.906 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:46.906 "assigned_rate_limits": { 00:31:46.906 "rw_ios_per_sec": 0, 00:31:46.906 "rw_mbytes_per_sec": 0, 00:31:46.906 "r_mbytes_per_sec": 0, 00:31:46.906 "w_mbytes_per_sec": 0 00:31:46.906 }, 00:31:46.906 "claimed": false, 00:31:46.906 "zoned": false, 00:31:46.906 "supported_io_types": { 00:31:46.906 "read": true, 00:31:46.906 "write": true, 00:31:46.906 "unmap": true, 00:31:46.906 "flush": true, 00:31:46.906 "reset": true, 00:31:46.906 "nvme_admin": false, 00:31:46.906 "nvme_io": false, 00:31:46.906 "nvme_io_md": false, 00:31:46.906 "write_zeroes": true, 00:31:46.906 "zcopy": true, 00:31:46.906 "get_zone_info": false, 00:31:46.906 "zone_management": false, 00:31:46.906 "zone_append": false, 00:31:46.906 "compare": false, 00:31:46.906 "compare_and_write": false, 00:31:46.906 "abort": true, 00:31:46.906 "seek_hole": false, 00:31:46.906 "seek_data": false, 00:31:46.906 "copy": true, 
00:31:46.906 "nvme_iov_md": false 00:31:46.906 }, 00:31:46.906 "memory_domains": [ 00:31:46.906 { 00:31:46.906 "dma_device_id": "system", 00:31:46.906 "dma_device_type": 1 00:31:46.906 }, 00:31:46.906 { 00:31:46.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.906 "dma_device_type": 2 00:31:46.906 } 00:31:46.906 ], 00:31:46.906 "driver_specific": {} 00:31:46.906 } 00:31:46.906 ] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 [2024-11-26 17:28:24.302494] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:46.906 [2024-11-26 17:28:24.302652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:46.906 [2024-11-26 17:28:24.302691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:46.906 [2024-11-26 17:28:24.304793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:46.906 [2024-11-26 17:28:24.304843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.906 17:28:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:46.906 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.166 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:47.166 "name": "Existed_Raid", 00:31:47.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.166 "strip_size_kb": 64, 00:31:47.166 "state": "configuring", 00:31:47.166 
"raid_level": "raid0", 00:31:47.166 "superblock": false, 00:31:47.166 "num_base_bdevs": 4, 00:31:47.166 "num_base_bdevs_discovered": 3, 00:31:47.166 "num_base_bdevs_operational": 4, 00:31:47.166 "base_bdevs_list": [ 00:31:47.166 { 00:31:47.166 "name": "BaseBdev1", 00:31:47.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.166 "is_configured": false, 00:31:47.166 "data_offset": 0, 00:31:47.166 "data_size": 0 00:31:47.166 }, 00:31:47.166 { 00:31:47.166 "name": "BaseBdev2", 00:31:47.166 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:47.166 "is_configured": true, 00:31:47.166 "data_offset": 0, 00:31:47.166 "data_size": 65536 00:31:47.166 }, 00:31:47.166 { 00:31:47.166 "name": "BaseBdev3", 00:31:47.166 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 00:31:47.166 "is_configured": true, 00:31:47.166 "data_offset": 0, 00:31:47.166 "data_size": 65536 00:31:47.166 }, 00:31:47.166 { 00:31:47.166 "name": "BaseBdev4", 00:31:47.166 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:47.166 "is_configured": true, 00:31:47.166 "data_offset": 0, 00:31:47.166 "data_size": 65536 00:31:47.166 } 00:31:47.166 ] 00:31:47.166 }' 00:31:47.166 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:47.166 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.425 [2024-11-26 17:28:24.766611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:47.425 "name": "Existed_Raid", 00:31:47.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.425 "strip_size_kb": 64, 00:31:47.425 "state": "configuring", 00:31:47.425 "raid_level": "raid0", 00:31:47.425 "superblock": false, 00:31:47.425 
"num_base_bdevs": 4, 00:31:47.425 "num_base_bdevs_discovered": 2, 00:31:47.425 "num_base_bdevs_operational": 4, 00:31:47.425 "base_bdevs_list": [ 00:31:47.425 { 00:31:47.425 "name": "BaseBdev1", 00:31:47.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.425 "is_configured": false, 00:31:47.425 "data_offset": 0, 00:31:47.425 "data_size": 0 00:31:47.425 }, 00:31:47.425 { 00:31:47.425 "name": null, 00:31:47.425 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:47.425 "is_configured": false, 00:31:47.425 "data_offset": 0, 00:31:47.425 "data_size": 65536 00:31:47.425 }, 00:31:47.425 { 00:31:47.425 "name": "BaseBdev3", 00:31:47.425 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 00:31:47.425 "is_configured": true, 00:31:47.425 "data_offset": 0, 00:31:47.425 "data_size": 65536 00:31:47.425 }, 00:31:47.425 { 00:31:47.425 "name": "BaseBdev4", 00:31:47.425 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:47.425 "is_configured": true, 00:31:47.425 "data_offset": 0, 00:31:47.425 "data_size": 65536 00:31:47.425 } 00:31:47.425 ] 00:31:47.425 }' 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:47.425 17:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:47.993 17:28:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.993 [2024-11-26 17:28:25.349132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:47.993 BaseBdev1 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:47.993 [ 00:31:47.993 { 00:31:47.993 "name": "BaseBdev1", 00:31:47.993 "aliases": [ 00:31:47.993 "d48ba814-0126-4f88-b9f5-47cc6920f854" 00:31:47.993 ], 00:31:47.993 "product_name": "Malloc disk", 00:31:47.993 "block_size": 512, 00:31:47.993 "num_blocks": 65536, 00:31:47.993 "uuid": "d48ba814-0126-4f88-b9f5-47cc6920f854", 00:31:47.993 "assigned_rate_limits": { 00:31:47.993 "rw_ios_per_sec": 0, 00:31:47.993 "rw_mbytes_per_sec": 0, 00:31:47.993 "r_mbytes_per_sec": 0, 00:31:47.993 "w_mbytes_per_sec": 0 00:31:47.993 }, 00:31:47.993 "claimed": true, 00:31:47.993 "claim_type": "exclusive_write", 00:31:47.993 "zoned": false, 00:31:47.993 "supported_io_types": { 00:31:47.993 "read": true, 00:31:47.993 "write": true, 00:31:47.993 "unmap": true, 00:31:47.993 "flush": true, 00:31:47.993 "reset": true, 00:31:47.993 "nvme_admin": false, 00:31:47.993 "nvme_io": false, 00:31:47.993 "nvme_io_md": false, 00:31:47.993 "write_zeroes": true, 00:31:47.993 "zcopy": true, 00:31:47.993 "get_zone_info": false, 00:31:47.993 "zone_management": false, 00:31:47.993 "zone_append": false, 00:31:47.993 "compare": false, 00:31:47.993 "compare_and_write": false, 00:31:47.993 "abort": true, 00:31:47.993 "seek_hole": false, 00:31:47.993 "seek_data": false, 00:31:47.993 "copy": true, 00:31:47.993 "nvme_iov_md": false 00:31:47.993 }, 00:31:47.993 "memory_domains": [ 00:31:47.993 { 00:31:47.993 "dma_device_id": "system", 00:31:47.993 "dma_device_type": 1 00:31:47.993 }, 00:31:47.993 { 00:31:47.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.993 "dma_device_type": 2 00:31:47.993 } 00:31:47.993 ], 00:31:47.993 "driver_specific": {} 00:31:47.993 } 00:31:47.993 ] 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:47.993 "name": "Existed_Raid", 00:31:47.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.993 "strip_size_kb": 64, 00:31:47.993 "state": "configuring", 00:31:47.993 "raid_level": "raid0", 00:31:47.993 "superblock": false, 
00:31:47.993 "num_base_bdevs": 4, 00:31:47.993 "num_base_bdevs_discovered": 3, 00:31:47.993 "num_base_bdevs_operational": 4, 00:31:47.993 "base_bdevs_list": [ 00:31:47.993 { 00:31:47.993 "name": "BaseBdev1", 00:31:47.993 "uuid": "d48ba814-0126-4f88-b9f5-47cc6920f854", 00:31:47.993 "is_configured": true, 00:31:47.993 "data_offset": 0, 00:31:47.993 "data_size": 65536 00:31:47.993 }, 00:31:47.993 { 00:31:47.993 "name": null, 00:31:47.993 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:47.993 "is_configured": false, 00:31:47.993 "data_offset": 0, 00:31:47.993 "data_size": 65536 00:31:47.993 }, 00:31:47.993 { 00:31:47.993 "name": "BaseBdev3", 00:31:47.993 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 00:31:47.993 "is_configured": true, 00:31:47.993 "data_offset": 0, 00:31:47.993 "data_size": 65536 00:31:47.993 }, 00:31:47.993 { 00:31:47.993 "name": "BaseBdev4", 00:31:47.993 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:47.993 "is_configured": true, 00:31:47.993 "data_offset": 0, 00:31:47.993 "data_size": 65536 00:31:47.993 } 00:31:47.993 ] 00:31:47.993 }' 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:47.993 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:48.561 17:28:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.561 [2024-11-26 17:28:25.813296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:48.561 17:28:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.561 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:48.561 "name": "Existed_Raid", 00:31:48.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.561 "strip_size_kb": 64, 00:31:48.561 "state": "configuring", 00:31:48.561 "raid_level": "raid0", 00:31:48.561 "superblock": false, 00:31:48.562 "num_base_bdevs": 4, 00:31:48.562 "num_base_bdevs_discovered": 2, 00:31:48.562 "num_base_bdevs_operational": 4, 00:31:48.562 "base_bdevs_list": [ 00:31:48.562 { 00:31:48.562 "name": "BaseBdev1", 00:31:48.562 "uuid": "d48ba814-0126-4f88-b9f5-47cc6920f854", 00:31:48.562 "is_configured": true, 00:31:48.562 "data_offset": 0, 00:31:48.562 "data_size": 65536 00:31:48.562 }, 00:31:48.562 { 00:31:48.562 "name": null, 00:31:48.562 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:48.562 "is_configured": false, 00:31:48.562 "data_offset": 0, 00:31:48.562 "data_size": 65536 00:31:48.562 }, 00:31:48.562 { 00:31:48.562 "name": null, 00:31:48.562 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 00:31:48.562 "is_configured": false, 00:31:48.562 "data_offset": 0, 00:31:48.562 "data_size": 65536 00:31:48.562 }, 00:31:48.562 { 00:31:48.562 "name": "BaseBdev4", 00:31:48.562 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:48.562 "is_configured": true, 00:31:48.562 "data_offset": 0, 00:31:48.562 "data_size": 65536 00:31:48.562 } 00:31:48.562 ] 00:31:48.562 }' 00:31:48.562 17:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:48.562 17:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.821 17:28:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.821 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.821 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.821 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.080 [2024-11-26 17:28:26.297389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:49.080 "name": "Existed_Raid", 00:31:49.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.080 "strip_size_kb": 64, 00:31:49.080 "state": "configuring", 00:31:49.080 "raid_level": "raid0", 00:31:49.080 "superblock": false, 00:31:49.080 "num_base_bdevs": 4, 00:31:49.080 "num_base_bdevs_discovered": 3, 00:31:49.080 "num_base_bdevs_operational": 4, 00:31:49.080 "base_bdevs_list": [ 00:31:49.080 { 00:31:49.080 "name": "BaseBdev1", 00:31:49.080 "uuid": "d48ba814-0126-4f88-b9f5-47cc6920f854", 00:31:49.080 "is_configured": true, 00:31:49.080 "data_offset": 0, 00:31:49.080 "data_size": 65536 00:31:49.080 }, 00:31:49.080 { 00:31:49.080 "name": null, 00:31:49.080 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:49.080 "is_configured": false, 00:31:49.080 "data_offset": 0, 00:31:49.080 "data_size": 65536 00:31:49.080 }, 00:31:49.080 { 00:31:49.080 "name": "BaseBdev3", 00:31:49.080 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 
00:31:49.080 "is_configured": true, 00:31:49.080 "data_offset": 0, 00:31:49.080 "data_size": 65536 00:31:49.080 }, 00:31:49.080 { 00:31:49.080 "name": "BaseBdev4", 00:31:49.080 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:49.080 "is_configured": true, 00:31:49.080 "data_offset": 0, 00:31:49.080 "data_size": 65536 00:31:49.080 } 00:31:49.080 ] 00:31:49.080 }' 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:49.080 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.338 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:49.338 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:49.338 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.338 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.338 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.338 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:49.338 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:49.338 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.338 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.338 [2024-11-26 17:28:26.773498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:49.597 17:28:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:49.597 "name": "Existed_Raid", 00:31:49.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.597 "strip_size_kb": 64, 00:31:49.597 "state": "configuring", 00:31:49.597 "raid_level": "raid0", 00:31:49.597 "superblock": false, 00:31:49.597 "num_base_bdevs": 4, 00:31:49.597 "num_base_bdevs_discovered": 2, 00:31:49.597 
"num_base_bdevs_operational": 4, 00:31:49.597 "base_bdevs_list": [ 00:31:49.597 { 00:31:49.597 "name": null, 00:31:49.597 "uuid": "d48ba814-0126-4f88-b9f5-47cc6920f854", 00:31:49.597 "is_configured": false, 00:31:49.597 "data_offset": 0, 00:31:49.597 "data_size": 65536 00:31:49.597 }, 00:31:49.597 { 00:31:49.597 "name": null, 00:31:49.597 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:49.597 "is_configured": false, 00:31:49.597 "data_offset": 0, 00:31:49.597 "data_size": 65536 00:31:49.597 }, 00:31:49.597 { 00:31:49.597 "name": "BaseBdev3", 00:31:49.597 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 00:31:49.597 "is_configured": true, 00:31:49.597 "data_offset": 0, 00:31:49.597 "data_size": 65536 00:31:49.597 }, 00:31:49.597 { 00:31:49.597 "name": "BaseBdev4", 00:31:49.597 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:49.597 "is_configured": true, 00:31:49.597 "data_offset": 0, 00:31:49.597 "data_size": 65536 00:31:49.597 } 00:31:49.597 ] 00:31:49.597 }' 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:49.597 17:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.856 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:49.856 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:49.857 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.857 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.857 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.857 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:49.857 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:31:49.857 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.857 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.857 [2024-11-26 17:28:27.295481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.115 
17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.115 "name": "Existed_Raid", 00:31:50.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.115 "strip_size_kb": 64, 00:31:50.115 "state": "configuring", 00:31:50.115 "raid_level": "raid0", 00:31:50.115 "superblock": false, 00:31:50.115 "num_base_bdevs": 4, 00:31:50.115 "num_base_bdevs_discovered": 3, 00:31:50.115 "num_base_bdevs_operational": 4, 00:31:50.115 "base_bdevs_list": [ 00:31:50.115 { 00:31:50.115 "name": null, 00:31:50.115 "uuid": "d48ba814-0126-4f88-b9f5-47cc6920f854", 00:31:50.115 "is_configured": false, 00:31:50.115 "data_offset": 0, 00:31:50.115 "data_size": 65536 00:31:50.115 }, 00:31:50.115 { 00:31:50.115 "name": "BaseBdev2", 00:31:50.115 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:50.115 "is_configured": true, 00:31:50.115 "data_offset": 0, 00:31:50.115 "data_size": 65536 00:31:50.115 }, 00:31:50.115 { 00:31:50.115 "name": "BaseBdev3", 00:31:50.115 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 00:31:50.115 "is_configured": true, 00:31:50.115 "data_offset": 0, 00:31:50.115 "data_size": 65536 00:31:50.115 }, 00:31:50.115 { 00:31:50.115 "name": "BaseBdev4", 00:31:50.115 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:50.115 "is_configured": true, 00:31:50.115 "data_offset": 0, 00:31:50.115 "data_size": 65536 00:31:50.115 } 00:31:50.115 ] 00:31:50.115 }' 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.115 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.374 17:28:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.374 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d48ba814-0126-4f88-b9f5-47cc6920f854 00:31:50.633 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.633 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.633 [2024-11-26 17:28:27.857218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:50.633 [2024-11-26 17:28:27.857274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:50.633 [2024-11-26 17:28:27.857283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:31:50.633 [2024-11-26 17:28:27.857564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:31:50.634 [2024-11-26 17:28:27.857705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:50.634 [2024-11-26 17:28:27.857718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:31:50.634 [2024-11-26 17:28:27.857957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:50.634 NewBaseBdev 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:31:50.634 [ 00:31:50.634 { 00:31:50.634 "name": "NewBaseBdev", 00:31:50.634 "aliases": [ 00:31:50.634 "d48ba814-0126-4f88-b9f5-47cc6920f854" 00:31:50.634 ], 00:31:50.634 "product_name": "Malloc disk", 00:31:50.634 "block_size": 512, 00:31:50.634 "num_blocks": 65536, 00:31:50.634 "uuid": "d48ba814-0126-4f88-b9f5-47cc6920f854", 00:31:50.634 "assigned_rate_limits": { 00:31:50.634 "rw_ios_per_sec": 0, 00:31:50.634 "rw_mbytes_per_sec": 0, 00:31:50.634 "r_mbytes_per_sec": 0, 00:31:50.634 "w_mbytes_per_sec": 0 00:31:50.634 }, 00:31:50.634 "claimed": true, 00:31:50.634 "claim_type": "exclusive_write", 00:31:50.634 "zoned": false, 00:31:50.634 "supported_io_types": { 00:31:50.634 "read": true, 00:31:50.634 "write": true, 00:31:50.634 "unmap": true, 00:31:50.634 "flush": true, 00:31:50.634 "reset": true, 00:31:50.634 "nvme_admin": false, 00:31:50.634 "nvme_io": false, 00:31:50.634 "nvme_io_md": false, 00:31:50.634 "write_zeroes": true, 00:31:50.634 "zcopy": true, 00:31:50.634 "get_zone_info": false, 00:31:50.634 "zone_management": false, 00:31:50.634 "zone_append": false, 00:31:50.634 "compare": false, 00:31:50.634 "compare_and_write": false, 00:31:50.634 "abort": true, 00:31:50.634 "seek_hole": false, 00:31:50.634 "seek_data": false, 00:31:50.634 "copy": true, 00:31:50.634 "nvme_iov_md": false 00:31:50.634 }, 00:31:50.634 "memory_domains": [ 00:31:50.634 { 00:31:50.634 "dma_device_id": "system", 00:31:50.634 "dma_device_type": 1 00:31:50.634 }, 00:31:50.634 { 00:31:50.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:50.634 "dma_device_type": 2 00:31:50.634 } 00:31:50.634 ], 00:31:50.634 "driver_specific": {} 00:31:50.634 } 00:31:50.634 ] 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.634 "name": "Existed_Raid", 00:31:50.634 "uuid": "72b32193-c43d-41ae-aab2-c2cf377074ae", 00:31:50.634 "strip_size_kb": 64, 00:31:50.634 "state": "online", 00:31:50.634 "raid_level": "raid0", 00:31:50.634 "superblock": false, 00:31:50.634 "num_base_bdevs": 4, 00:31:50.634 
"num_base_bdevs_discovered": 4, 00:31:50.634 "num_base_bdevs_operational": 4, 00:31:50.634 "base_bdevs_list": [ 00:31:50.634 { 00:31:50.634 "name": "NewBaseBdev", 00:31:50.634 "uuid": "d48ba814-0126-4f88-b9f5-47cc6920f854", 00:31:50.634 "is_configured": true, 00:31:50.634 "data_offset": 0, 00:31:50.634 "data_size": 65536 00:31:50.634 }, 00:31:50.634 { 00:31:50.634 "name": "BaseBdev2", 00:31:50.634 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:50.634 "is_configured": true, 00:31:50.634 "data_offset": 0, 00:31:50.634 "data_size": 65536 00:31:50.634 }, 00:31:50.634 { 00:31:50.634 "name": "BaseBdev3", 00:31:50.634 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 00:31:50.634 "is_configured": true, 00:31:50.634 "data_offset": 0, 00:31:50.634 "data_size": 65536 00:31:50.634 }, 00:31:50.634 { 00:31:50.634 "name": "BaseBdev4", 00:31:50.634 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:50.634 "is_configured": true, 00:31:50.634 "data_offset": 0, 00:31:50.634 "data_size": 65536 00:31:50.634 } 00:31:50.634 ] 00:31:50.634 }' 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.634 17:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:51.203 [2024-11-26 17:28:28.353743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:51.203 "name": "Existed_Raid", 00:31:51.203 "aliases": [ 00:31:51.203 "72b32193-c43d-41ae-aab2-c2cf377074ae" 00:31:51.203 ], 00:31:51.203 "product_name": "Raid Volume", 00:31:51.203 "block_size": 512, 00:31:51.203 "num_blocks": 262144, 00:31:51.203 "uuid": "72b32193-c43d-41ae-aab2-c2cf377074ae", 00:31:51.203 "assigned_rate_limits": { 00:31:51.203 "rw_ios_per_sec": 0, 00:31:51.203 "rw_mbytes_per_sec": 0, 00:31:51.203 "r_mbytes_per_sec": 0, 00:31:51.203 "w_mbytes_per_sec": 0 00:31:51.203 }, 00:31:51.203 "claimed": false, 00:31:51.203 "zoned": false, 00:31:51.203 "supported_io_types": { 00:31:51.203 "read": true, 00:31:51.203 "write": true, 00:31:51.203 "unmap": true, 00:31:51.203 "flush": true, 00:31:51.203 "reset": true, 00:31:51.203 "nvme_admin": false, 00:31:51.203 "nvme_io": false, 00:31:51.203 "nvme_io_md": false, 00:31:51.203 "write_zeroes": true, 00:31:51.203 "zcopy": false, 00:31:51.203 "get_zone_info": false, 00:31:51.203 "zone_management": false, 00:31:51.203 "zone_append": false, 00:31:51.203 "compare": false, 00:31:51.203 "compare_and_write": false, 00:31:51.203 "abort": false, 00:31:51.203 "seek_hole": false, 00:31:51.203 "seek_data": false, 00:31:51.203 "copy": false, 00:31:51.203 "nvme_iov_md": false 00:31:51.203 }, 00:31:51.203 "memory_domains": [ 
00:31:51.203 { 00:31:51.203 "dma_device_id": "system", 00:31:51.203 "dma_device_type": 1 00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.203 "dma_device_type": 2 00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "dma_device_id": "system", 00:31:51.203 "dma_device_type": 1 00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.203 "dma_device_type": 2 00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "dma_device_id": "system", 00:31:51.203 "dma_device_type": 1 00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.203 "dma_device_type": 2 00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "dma_device_id": "system", 00:31:51.203 "dma_device_type": 1 00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.203 "dma_device_type": 2 00:31:51.203 } 00:31:51.203 ], 00:31:51.203 "driver_specific": { 00:31:51.203 "raid": { 00:31:51.203 "uuid": "72b32193-c43d-41ae-aab2-c2cf377074ae", 00:31:51.203 "strip_size_kb": 64, 00:31:51.203 "state": "online", 00:31:51.203 "raid_level": "raid0", 00:31:51.203 "superblock": false, 00:31:51.203 "num_base_bdevs": 4, 00:31:51.203 "num_base_bdevs_discovered": 4, 00:31:51.203 "num_base_bdevs_operational": 4, 00:31:51.203 "base_bdevs_list": [ 00:31:51.203 { 00:31:51.203 "name": "NewBaseBdev", 00:31:51.203 "uuid": "d48ba814-0126-4f88-b9f5-47cc6920f854", 00:31:51.203 "is_configured": true, 00:31:51.203 "data_offset": 0, 00:31:51.203 "data_size": 65536 00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "name": "BaseBdev2", 00:31:51.203 "uuid": "1eff40e9-c672-47e5-b45f-aa0b0709a5b2", 00:31:51.203 "is_configured": true, 00:31:51.203 "data_offset": 0, 00:31:51.203 "data_size": 65536 00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "name": "BaseBdev3", 00:31:51.203 "uuid": "1bcec351-3a31-4afb-b705-d446c88607bc", 00:31:51.203 "is_configured": true, 00:31:51.203 "data_offset": 0, 00:31:51.203 "data_size": 65536 
00:31:51.203 }, 00:31:51.203 { 00:31:51.203 "name": "BaseBdev4", 00:31:51.203 "uuid": "a4d85439-ed34-40e1-9842-d03222b53288", 00:31:51.203 "is_configured": true, 00:31:51.203 "data_offset": 0, 00:31:51.203 "data_size": 65536 00:31:51.203 } 00:31:51.203 ] 00:31:51.203 } 00:31:51.203 } 00:31:51.203 }' 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:51.203 BaseBdev2 00:31:51.203 BaseBdev3 00:31:51.203 BaseBdev4' 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:51.203 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:51.204 
17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.204 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.463 [2024-11-26 17:28:28.673445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:51.463 [2024-11-26 17:28:28.673582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:51.463 [2024-11-26 17:28:28.673676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:51.463 [2024-11-26 17:28:28.673746] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:51.463 [2024-11-26 17:28:28.673758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69806 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69806 ']' 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69806 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69806 00:31:51.463 killing process with pid 69806 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69806' 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69806 00:31:51.463 [2024-11-26 17:28:28.712260] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:51.463 17:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69806 00:31:51.722 [2024-11-26 17:28:29.120513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:31:53.099 00:31:53.099 real 0m11.784s 00:31:53.099 user 0m18.804s 00:31:53.099 sys 0m2.185s 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.099 ************************************ 00:31:53.099 END TEST raid_state_function_test 00:31:53.099 ************************************ 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.099 17:28:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:31:53.099 17:28:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:53.099 17:28:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.099 17:28:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:53.099 ************************************ 00:31:53.099 START TEST raid_state_function_test_sb 00:31:53.099 ************************************ 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:53.099 
17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70477 00:31:53.099 Process raid pid: 70477 00:31:53.099 17:28:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70477' 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70477 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70477 ']' 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:53.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:53.099 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.099 [2024-11-26 17:28:30.495159] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:31:53.099 [2024-11-26 17:28:30.495344] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.358 [2024-11-26 17:28:30.685179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.616 [2024-11-26 17:28:30.806492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.616 [2024-11-26 17:28:31.031319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:53.616 [2024-11-26 17:28:31.031364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.183 [2024-11-26 17:28:31.344663] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:54.183 [2024-11-26 17:28:31.344720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:54.183 [2024-11-26 17:28:31.344732] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:54.183 [2024-11-26 17:28:31.344745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:54.183 [2024-11-26 17:28:31.344753] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:31:54.183 [2024-11-26 17:28:31.344765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:54.183 [2024-11-26 17:28:31.344772] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:54.183 [2024-11-26 17:28:31.344784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:54.183 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:54.184 17:28:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:54.184 "name": "Existed_Raid", 00:31:54.184 "uuid": "51ac7d3c-e196-41cd-bee7-f80530d07a36", 00:31:54.184 "strip_size_kb": 64, 00:31:54.184 "state": "configuring", 00:31:54.184 "raid_level": "raid0", 00:31:54.184 "superblock": true, 00:31:54.184 "num_base_bdevs": 4, 00:31:54.184 "num_base_bdevs_discovered": 0, 00:31:54.184 "num_base_bdevs_operational": 4, 00:31:54.184 "base_bdevs_list": [ 00:31:54.184 { 00:31:54.184 "name": "BaseBdev1", 00:31:54.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.184 "is_configured": false, 00:31:54.184 "data_offset": 0, 00:31:54.184 "data_size": 0 00:31:54.184 }, 00:31:54.184 { 00:31:54.184 "name": "BaseBdev2", 00:31:54.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.184 "is_configured": false, 00:31:54.184 "data_offset": 0, 00:31:54.184 "data_size": 0 00:31:54.184 }, 00:31:54.184 { 00:31:54.184 "name": "BaseBdev3", 00:31:54.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.184 "is_configured": false, 00:31:54.184 "data_offset": 0, 00:31:54.184 "data_size": 0 00:31:54.184 }, 00:31:54.184 { 00:31:54.184 "name": "BaseBdev4", 00:31:54.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.184 "is_configured": false, 00:31:54.184 "data_offset": 0, 00:31:54.184 "data_size": 0 00:31:54.184 } 00:31:54.184 ] 00:31:54.184 }' 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:54.184 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.487 [2024-11-26 17:28:31.784681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:54.487 [2024-11-26 17:28:31.784727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.487 [2024-11-26 17:28:31.792707] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:54.487 [2024-11-26 17:28:31.792756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:54.487 [2024-11-26 17:28:31.792767] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:54.487 [2024-11-26 17:28:31.792779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:54.487 [2024-11-26 17:28:31.792794] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:54.487 [2024-11-26 17:28:31.792806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:54.487 [2024-11-26 17:28:31.792814] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:31:54.487 [2024-11-26 17:28:31.792826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.487 [2024-11-26 17:28:31.838914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:54.487 BaseBdev1 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:54.487 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.488 [ 00:31:54.488 { 00:31:54.488 "name": "BaseBdev1", 00:31:54.488 "aliases": [ 00:31:54.488 "9a25e175-7984-48d7-a05c-5cc55b6bfb9a" 00:31:54.488 ], 00:31:54.488 "product_name": "Malloc disk", 00:31:54.488 "block_size": 512, 00:31:54.488 "num_blocks": 65536, 00:31:54.488 "uuid": "9a25e175-7984-48d7-a05c-5cc55b6bfb9a", 00:31:54.488 "assigned_rate_limits": { 00:31:54.488 "rw_ios_per_sec": 0, 00:31:54.488 "rw_mbytes_per_sec": 0, 00:31:54.488 "r_mbytes_per_sec": 0, 00:31:54.488 "w_mbytes_per_sec": 0 00:31:54.488 }, 00:31:54.488 "claimed": true, 00:31:54.488 "claim_type": "exclusive_write", 00:31:54.488 "zoned": false, 00:31:54.488 "supported_io_types": { 00:31:54.488 "read": true, 00:31:54.488 "write": true, 00:31:54.488 "unmap": true, 00:31:54.488 "flush": true, 00:31:54.488 "reset": true, 00:31:54.488 "nvme_admin": false, 00:31:54.488 "nvme_io": false, 00:31:54.488 "nvme_io_md": false, 00:31:54.488 "write_zeroes": true, 00:31:54.488 "zcopy": true, 00:31:54.488 "get_zone_info": false, 00:31:54.488 "zone_management": false, 00:31:54.488 "zone_append": false, 00:31:54.488 "compare": false, 00:31:54.488 "compare_and_write": false, 00:31:54.488 "abort": true, 00:31:54.488 "seek_hole": false, 00:31:54.488 "seek_data": false, 00:31:54.488 "copy": true, 00:31:54.488 "nvme_iov_md": false 00:31:54.488 }, 00:31:54.488 "memory_domains": [ 00:31:54.488 { 00:31:54.488 "dma_device_id": "system", 00:31:54.488 "dma_device_type": 1 00:31:54.488 }, 00:31:54.488 { 00:31:54.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.488 "dma_device_type": 2 00:31:54.488 } 00:31:54.488 ], 00:31:54.488 "driver_specific": {} 
00:31:54.488 } 00:31:54.488 ] 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:54.488 17:28:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.746 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:54.746 "name": "Existed_Raid", 00:31:54.746 "uuid": "0f4b5cce-2ed8-41b0-8f6d-bd2b974e383b", 00:31:54.746 "strip_size_kb": 64, 00:31:54.746 "state": "configuring", 00:31:54.746 "raid_level": "raid0", 00:31:54.746 "superblock": true, 00:31:54.746 "num_base_bdevs": 4, 00:31:54.746 "num_base_bdevs_discovered": 1, 00:31:54.746 "num_base_bdevs_operational": 4, 00:31:54.746 "base_bdevs_list": [ 00:31:54.746 { 00:31:54.746 "name": "BaseBdev1", 00:31:54.746 "uuid": "9a25e175-7984-48d7-a05c-5cc55b6bfb9a", 00:31:54.746 "is_configured": true, 00:31:54.746 "data_offset": 2048, 00:31:54.746 "data_size": 63488 00:31:54.746 }, 00:31:54.746 { 00:31:54.746 "name": "BaseBdev2", 00:31:54.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.746 "is_configured": false, 00:31:54.746 "data_offset": 0, 00:31:54.746 "data_size": 0 00:31:54.746 }, 00:31:54.746 { 00:31:54.746 "name": "BaseBdev3", 00:31:54.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.746 "is_configured": false, 00:31:54.746 "data_offset": 0, 00:31:54.746 "data_size": 0 00:31:54.746 }, 00:31:54.746 { 00:31:54.746 "name": "BaseBdev4", 00:31:54.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.746 "is_configured": false, 00:31:54.746 "data_offset": 0, 00:31:54.746 "data_size": 0 00:31:54.746 } 00:31:54.746 ] 00:31:54.746 }' 00:31:54.746 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:54.746 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:55.005 [2024-11-26 17:28:32.263092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:55.005 [2024-11-26 17:28:32.263150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.005 [2024-11-26 17:28:32.271158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:55.005 [2024-11-26 17:28:32.273249] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:55.005 [2024-11-26 17:28:32.273294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:55.005 [2024-11-26 17:28:32.273306] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:55.005 [2024-11-26 17:28:32.273321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:55.005 [2024-11-26 17:28:32.273329] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:55.005 [2024-11-26 17:28:32.273340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:55.005 17:28:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:55.005 "name": 
"Existed_Raid", 00:31:55.005 "uuid": "fe974364-c18c-4e48-a951-a3ffcefe7bd3", 00:31:55.005 "strip_size_kb": 64, 00:31:55.005 "state": "configuring", 00:31:55.005 "raid_level": "raid0", 00:31:55.005 "superblock": true, 00:31:55.005 "num_base_bdevs": 4, 00:31:55.005 "num_base_bdevs_discovered": 1, 00:31:55.005 "num_base_bdevs_operational": 4, 00:31:55.005 "base_bdevs_list": [ 00:31:55.005 { 00:31:55.005 "name": "BaseBdev1", 00:31:55.005 "uuid": "9a25e175-7984-48d7-a05c-5cc55b6bfb9a", 00:31:55.005 "is_configured": true, 00:31:55.005 "data_offset": 2048, 00:31:55.005 "data_size": 63488 00:31:55.005 }, 00:31:55.005 { 00:31:55.005 "name": "BaseBdev2", 00:31:55.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.005 "is_configured": false, 00:31:55.005 "data_offset": 0, 00:31:55.005 "data_size": 0 00:31:55.005 }, 00:31:55.005 { 00:31:55.005 "name": "BaseBdev3", 00:31:55.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.005 "is_configured": false, 00:31:55.005 "data_offset": 0, 00:31:55.005 "data_size": 0 00:31:55.005 }, 00:31:55.005 { 00:31:55.005 "name": "BaseBdev4", 00:31:55.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.005 "is_configured": false, 00:31:55.005 "data_offset": 0, 00:31:55.005 "data_size": 0 00:31:55.005 } 00:31:55.005 ] 00:31:55.005 }' 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:55.005 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.572 [2024-11-26 17:28:32.756406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:31:55.572 BaseBdev2 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.572 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.572 [ 00:31:55.572 { 00:31:55.572 "name": "BaseBdev2", 00:31:55.572 "aliases": [ 00:31:55.572 "b314cddf-f7ed-46a9-91bd-f787a875559f" 00:31:55.572 ], 00:31:55.572 "product_name": "Malloc disk", 00:31:55.572 "block_size": 512, 00:31:55.572 "num_blocks": 65536, 00:31:55.572 "uuid": "b314cddf-f7ed-46a9-91bd-f787a875559f", 00:31:55.572 
"assigned_rate_limits": { 00:31:55.573 "rw_ios_per_sec": 0, 00:31:55.573 "rw_mbytes_per_sec": 0, 00:31:55.573 "r_mbytes_per_sec": 0, 00:31:55.573 "w_mbytes_per_sec": 0 00:31:55.573 }, 00:31:55.573 "claimed": true, 00:31:55.573 "claim_type": "exclusive_write", 00:31:55.573 "zoned": false, 00:31:55.573 "supported_io_types": { 00:31:55.573 "read": true, 00:31:55.573 "write": true, 00:31:55.573 "unmap": true, 00:31:55.573 "flush": true, 00:31:55.573 "reset": true, 00:31:55.573 "nvme_admin": false, 00:31:55.573 "nvme_io": false, 00:31:55.573 "nvme_io_md": false, 00:31:55.573 "write_zeroes": true, 00:31:55.573 "zcopy": true, 00:31:55.573 "get_zone_info": false, 00:31:55.573 "zone_management": false, 00:31:55.573 "zone_append": false, 00:31:55.573 "compare": false, 00:31:55.573 "compare_and_write": false, 00:31:55.573 "abort": true, 00:31:55.573 "seek_hole": false, 00:31:55.573 "seek_data": false, 00:31:55.573 "copy": true, 00:31:55.573 "nvme_iov_md": false 00:31:55.573 }, 00:31:55.573 "memory_domains": [ 00:31:55.573 { 00:31:55.573 "dma_device_id": "system", 00:31:55.573 "dma_device_type": 1 00:31:55.573 }, 00:31:55.573 { 00:31:55.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:55.573 "dma_device_type": 2 00:31:55.573 } 00:31:55.573 ], 00:31:55.573 "driver_specific": {} 00:31:55.573 } 00:31:55.573 ] 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:55.573 "name": "Existed_Raid", 00:31:55.573 "uuid": "fe974364-c18c-4e48-a951-a3ffcefe7bd3", 00:31:55.573 "strip_size_kb": 64, 00:31:55.573 "state": "configuring", 00:31:55.573 "raid_level": "raid0", 00:31:55.573 "superblock": true, 00:31:55.573 "num_base_bdevs": 4, 00:31:55.573 "num_base_bdevs_discovered": 2, 00:31:55.573 "num_base_bdevs_operational": 4, 
00:31:55.573 "base_bdevs_list": [ 00:31:55.573 { 00:31:55.573 "name": "BaseBdev1", 00:31:55.573 "uuid": "9a25e175-7984-48d7-a05c-5cc55b6bfb9a", 00:31:55.573 "is_configured": true, 00:31:55.573 "data_offset": 2048, 00:31:55.573 "data_size": 63488 00:31:55.573 }, 00:31:55.573 { 00:31:55.573 "name": "BaseBdev2", 00:31:55.573 "uuid": "b314cddf-f7ed-46a9-91bd-f787a875559f", 00:31:55.573 "is_configured": true, 00:31:55.573 "data_offset": 2048, 00:31:55.573 "data_size": 63488 00:31:55.573 }, 00:31:55.573 { 00:31:55.573 "name": "BaseBdev3", 00:31:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.573 "is_configured": false, 00:31:55.573 "data_offset": 0, 00:31:55.573 "data_size": 0 00:31:55.573 }, 00:31:55.573 { 00:31:55.573 "name": "BaseBdev4", 00:31:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.573 "is_configured": false, 00:31:55.573 "data_offset": 0, 00:31:55.573 "data_size": 0 00:31:55.573 } 00:31:55.573 ] 00:31:55.573 }' 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:55.573 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.832 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:55.832 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.832 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.091 [2024-11-26 17:28:33.294823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:56.091 BaseBdev3 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.091 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.091 [ 00:31:56.091 { 00:31:56.091 "name": "BaseBdev3", 00:31:56.091 "aliases": [ 00:31:56.091 "23f8bf4e-3f1a-458e-b7a6-69858b1cd8f5" 00:31:56.091 ], 00:31:56.091 "product_name": "Malloc disk", 00:31:56.091 "block_size": 512, 00:31:56.091 "num_blocks": 65536, 00:31:56.091 "uuid": "23f8bf4e-3f1a-458e-b7a6-69858b1cd8f5", 00:31:56.091 "assigned_rate_limits": { 00:31:56.091 "rw_ios_per_sec": 0, 00:31:56.091 "rw_mbytes_per_sec": 0, 00:31:56.091 "r_mbytes_per_sec": 0, 00:31:56.091 "w_mbytes_per_sec": 0 00:31:56.091 }, 00:31:56.091 "claimed": true, 00:31:56.091 "claim_type": "exclusive_write", 00:31:56.091 "zoned": false, 00:31:56.091 "supported_io_types": { 00:31:56.091 "read": true, 00:31:56.091 
"write": true, 00:31:56.091 "unmap": true, 00:31:56.091 "flush": true, 00:31:56.091 "reset": true, 00:31:56.091 "nvme_admin": false, 00:31:56.091 "nvme_io": false, 00:31:56.092 "nvme_io_md": false, 00:31:56.092 "write_zeroes": true, 00:31:56.092 "zcopy": true, 00:31:56.092 "get_zone_info": false, 00:31:56.092 "zone_management": false, 00:31:56.092 "zone_append": false, 00:31:56.092 "compare": false, 00:31:56.092 "compare_and_write": false, 00:31:56.092 "abort": true, 00:31:56.092 "seek_hole": false, 00:31:56.092 "seek_data": false, 00:31:56.092 "copy": true, 00:31:56.092 "nvme_iov_md": false 00:31:56.092 }, 00:31:56.092 "memory_domains": [ 00:31:56.092 { 00:31:56.092 "dma_device_id": "system", 00:31:56.092 "dma_device_type": 1 00:31:56.092 }, 00:31:56.092 { 00:31:56.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:56.092 "dma_device_type": 2 00:31:56.092 } 00:31:56.092 ], 00:31:56.092 "driver_specific": {} 00:31:56.092 } 00:31:56.092 ] 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.092 "name": "Existed_Raid", 00:31:56.092 "uuid": "fe974364-c18c-4e48-a951-a3ffcefe7bd3", 00:31:56.092 "strip_size_kb": 64, 00:31:56.092 "state": "configuring", 00:31:56.092 "raid_level": "raid0", 00:31:56.092 "superblock": true, 00:31:56.092 "num_base_bdevs": 4, 00:31:56.092 "num_base_bdevs_discovered": 3, 00:31:56.092 "num_base_bdevs_operational": 4, 00:31:56.092 "base_bdevs_list": [ 00:31:56.092 { 00:31:56.092 "name": "BaseBdev1", 00:31:56.092 "uuid": "9a25e175-7984-48d7-a05c-5cc55b6bfb9a", 00:31:56.092 "is_configured": true, 00:31:56.092 "data_offset": 2048, 00:31:56.092 "data_size": 63488 00:31:56.092 }, 00:31:56.092 { 00:31:56.092 "name": "BaseBdev2", 00:31:56.092 "uuid": 
"b314cddf-f7ed-46a9-91bd-f787a875559f", 00:31:56.092 "is_configured": true, 00:31:56.092 "data_offset": 2048, 00:31:56.092 "data_size": 63488 00:31:56.092 }, 00:31:56.092 { 00:31:56.092 "name": "BaseBdev3", 00:31:56.092 "uuid": "23f8bf4e-3f1a-458e-b7a6-69858b1cd8f5", 00:31:56.092 "is_configured": true, 00:31:56.092 "data_offset": 2048, 00:31:56.092 "data_size": 63488 00:31:56.092 }, 00:31:56.092 { 00:31:56.092 "name": "BaseBdev4", 00:31:56.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.092 "is_configured": false, 00:31:56.092 "data_offset": 0, 00:31:56.092 "data_size": 0 00:31:56.092 } 00:31:56.092 ] 00:31:56.092 }' 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.092 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.350 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:31:56.351 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.351 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.610 [2024-11-26 17:28:33.812542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:56.610 [2024-11-26 17:28:33.812809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:56.610 [2024-11-26 17:28:33.812830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:31:56.610 [2024-11-26 17:28:33.813164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:56.610 BaseBdev4 00:31:56.610 [2024-11-26 17:28:33.813327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:56.610 [2024-11-26 17:28:33.813343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:31:56.610 [2024-11-26 17:28:33.813508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.610 [ 00:31:56.610 { 00:31:56.610 "name": "BaseBdev4", 00:31:56.610 "aliases": [ 00:31:56.610 "5f13fee4-1717-48eb-8857-f1ca8fedb7ad" 00:31:56.610 ], 00:31:56.610 "product_name": "Malloc disk", 00:31:56.610 "block_size": 512, 00:31:56.610 
"num_blocks": 65536, 00:31:56.610 "uuid": "5f13fee4-1717-48eb-8857-f1ca8fedb7ad", 00:31:56.610 "assigned_rate_limits": { 00:31:56.610 "rw_ios_per_sec": 0, 00:31:56.610 "rw_mbytes_per_sec": 0, 00:31:56.610 "r_mbytes_per_sec": 0, 00:31:56.610 "w_mbytes_per_sec": 0 00:31:56.610 }, 00:31:56.610 "claimed": true, 00:31:56.610 "claim_type": "exclusive_write", 00:31:56.610 "zoned": false, 00:31:56.610 "supported_io_types": { 00:31:56.610 "read": true, 00:31:56.610 "write": true, 00:31:56.610 "unmap": true, 00:31:56.610 "flush": true, 00:31:56.610 "reset": true, 00:31:56.610 "nvme_admin": false, 00:31:56.610 "nvme_io": false, 00:31:56.610 "nvme_io_md": false, 00:31:56.610 "write_zeroes": true, 00:31:56.610 "zcopy": true, 00:31:56.610 "get_zone_info": false, 00:31:56.610 "zone_management": false, 00:31:56.610 "zone_append": false, 00:31:56.610 "compare": false, 00:31:56.610 "compare_and_write": false, 00:31:56.610 "abort": true, 00:31:56.610 "seek_hole": false, 00:31:56.610 "seek_data": false, 00:31:56.610 "copy": true, 00:31:56.610 "nvme_iov_md": false 00:31:56.610 }, 00:31:56.610 "memory_domains": [ 00:31:56.610 { 00:31:56.610 "dma_device_id": "system", 00:31:56.610 "dma_device_type": 1 00:31:56.610 }, 00:31:56.610 { 00:31:56.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:56.610 "dma_device_type": 2 00:31:56.610 } 00:31:56.610 ], 00:31:56.610 "driver_specific": {} 00:31:56.610 } 00:31:56.610 ] 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.610 "name": "Existed_Raid", 00:31:56.610 "uuid": "fe974364-c18c-4e48-a951-a3ffcefe7bd3", 00:31:56.610 "strip_size_kb": 64, 00:31:56.610 "state": "online", 00:31:56.610 "raid_level": "raid0", 00:31:56.610 "superblock": true, 00:31:56.610 "num_base_bdevs": 4, 
00:31:56.610 "num_base_bdevs_discovered": 4, 00:31:56.610 "num_base_bdevs_operational": 4, 00:31:56.610 "base_bdevs_list": [ 00:31:56.610 { 00:31:56.610 "name": "BaseBdev1", 00:31:56.610 "uuid": "9a25e175-7984-48d7-a05c-5cc55b6bfb9a", 00:31:56.610 "is_configured": true, 00:31:56.610 "data_offset": 2048, 00:31:56.610 "data_size": 63488 00:31:56.610 }, 00:31:56.610 { 00:31:56.610 "name": "BaseBdev2", 00:31:56.610 "uuid": "b314cddf-f7ed-46a9-91bd-f787a875559f", 00:31:56.610 "is_configured": true, 00:31:56.610 "data_offset": 2048, 00:31:56.610 "data_size": 63488 00:31:56.610 }, 00:31:56.610 { 00:31:56.610 "name": "BaseBdev3", 00:31:56.610 "uuid": "23f8bf4e-3f1a-458e-b7a6-69858b1cd8f5", 00:31:56.610 "is_configured": true, 00:31:56.610 "data_offset": 2048, 00:31:56.610 "data_size": 63488 00:31:56.610 }, 00:31:56.610 { 00:31:56.610 "name": "BaseBdev4", 00:31:56.610 "uuid": "5f13fee4-1717-48eb-8857-f1ca8fedb7ad", 00:31:56.610 "is_configured": true, 00:31:56.610 "data_offset": 2048, 00:31:56.610 "data_size": 63488 00:31:56.610 } 00:31:56.610 ] 00:31:56.610 }' 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.610 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.869 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:56.869 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:56.869 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:56.869 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:56.869 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:56.869 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:56.869 
17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:56.869 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:56.869 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.869 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.869 [2024-11-26 17:28:34.305100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:57.255 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.255 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.255 "name": "Existed_Raid", 00:31:57.255 "aliases": [ 00:31:57.255 "fe974364-c18c-4e48-a951-a3ffcefe7bd3" 00:31:57.255 ], 00:31:57.255 "product_name": "Raid Volume", 00:31:57.255 "block_size": 512, 00:31:57.255 "num_blocks": 253952, 00:31:57.255 "uuid": "fe974364-c18c-4e48-a951-a3ffcefe7bd3", 00:31:57.255 "assigned_rate_limits": { 00:31:57.255 "rw_ios_per_sec": 0, 00:31:57.255 "rw_mbytes_per_sec": 0, 00:31:57.255 "r_mbytes_per_sec": 0, 00:31:57.255 "w_mbytes_per_sec": 0 00:31:57.255 }, 00:31:57.255 "claimed": false, 00:31:57.255 "zoned": false, 00:31:57.255 "supported_io_types": { 00:31:57.255 "read": true, 00:31:57.255 "write": true, 00:31:57.255 "unmap": true, 00:31:57.255 "flush": true, 00:31:57.255 "reset": true, 00:31:57.255 "nvme_admin": false, 00:31:57.255 "nvme_io": false, 00:31:57.255 "nvme_io_md": false, 00:31:57.255 "write_zeroes": true, 00:31:57.255 "zcopy": false, 00:31:57.255 "get_zone_info": false, 00:31:57.255 "zone_management": false, 00:31:57.255 "zone_append": false, 00:31:57.255 "compare": false, 00:31:57.255 "compare_and_write": false, 00:31:57.255 "abort": false, 00:31:57.255 "seek_hole": false, 00:31:57.255 "seek_data": false, 00:31:57.255 "copy": false, 00:31:57.255 
"nvme_iov_md": false 00:31:57.255 }, 00:31:57.255 "memory_domains": [ 00:31:57.255 { 00:31:57.255 "dma_device_id": "system", 00:31:57.255 "dma_device_type": 1 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.255 "dma_device_type": 2 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "dma_device_id": "system", 00:31:57.255 "dma_device_type": 1 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.255 "dma_device_type": 2 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "dma_device_id": "system", 00:31:57.255 "dma_device_type": 1 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.255 "dma_device_type": 2 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "dma_device_id": "system", 00:31:57.255 "dma_device_type": 1 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.255 "dma_device_type": 2 00:31:57.255 } 00:31:57.255 ], 00:31:57.255 "driver_specific": { 00:31:57.255 "raid": { 00:31:57.255 "uuid": "fe974364-c18c-4e48-a951-a3ffcefe7bd3", 00:31:57.255 "strip_size_kb": 64, 00:31:57.255 "state": "online", 00:31:57.255 "raid_level": "raid0", 00:31:57.255 "superblock": true, 00:31:57.255 "num_base_bdevs": 4, 00:31:57.255 "num_base_bdevs_discovered": 4, 00:31:57.255 "num_base_bdevs_operational": 4, 00:31:57.255 "base_bdevs_list": [ 00:31:57.255 { 00:31:57.255 "name": "BaseBdev1", 00:31:57.255 "uuid": "9a25e175-7984-48d7-a05c-5cc55b6bfb9a", 00:31:57.255 "is_configured": true, 00:31:57.255 "data_offset": 2048, 00:31:57.255 "data_size": 63488 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "name": "BaseBdev2", 00:31:57.255 "uuid": "b314cddf-f7ed-46a9-91bd-f787a875559f", 00:31:57.255 "is_configured": true, 00:31:57.255 "data_offset": 2048, 00:31:57.255 "data_size": 63488 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "name": "BaseBdev3", 00:31:57.255 "uuid": "23f8bf4e-3f1a-458e-b7a6-69858b1cd8f5", 00:31:57.255 "is_configured": true, 
00:31:57.255 "data_offset": 2048, 00:31:57.255 "data_size": 63488 00:31:57.255 }, 00:31:57.255 { 00:31:57.255 "name": "BaseBdev4", 00:31:57.255 "uuid": "5f13fee4-1717-48eb-8857-f1ca8fedb7ad", 00:31:57.255 "is_configured": true, 00:31:57.255 "data_offset": 2048, 00:31:57.255 "data_size": 63488 00:31:57.255 } 00:31:57.255 ] 00:31:57.255 } 00:31:57.255 } 00:31:57.255 }' 00:31:57.255 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:57.255 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:57.255 BaseBdev2 00:31:57.255 BaseBdev3 00:31:57.255 BaseBdev4' 00:31:57.255 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.255 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:57.255 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.255 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.256 17:28:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.256 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.256 [2024-11-26 17:28:34.624847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:57.256 [2024-11-26 17:28:34.624884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:57.256 [2024-11-26 17:28:34.624940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.514 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.515 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.515 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:57.515 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:57.515 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:57.515 "name": "Existed_Raid", 00:31:57.515 "uuid": "fe974364-c18c-4e48-a951-a3ffcefe7bd3", 00:31:57.515 "strip_size_kb": 64, 00:31:57.515 "state": "offline", 00:31:57.515 "raid_level": "raid0", 00:31:57.515 "superblock": true, 00:31:57.515 "num_base_bdevs": 4, 00:31:57.515 "num_base_bdevs_discovered": 3, 00:31:57.515 "num_base_bdevs_operational": 3, 00:31:57.515 "base_bdevs_list": [ 00:31:57.515 { 00:31:57.515 "name": null, 00:31:57.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.515 "is_configured": false, 00:31:57.515 "data_offset": 0, 00:31:57.515 "data_size": 63488 00:31:57.515 }, 00:31:57.515 { 00:31:57.515 "name": "BaseBdev2", 00:31:57.515 "uuid": "b314cddf-f7ed-46a9-91bd-f787a875559f", 00:31:57.515 "is_configured": true, 00:31:57.515 "data_offset": 2048, 00:31:57.515 "data_size": 63488 00:31:57.515 }, 00:31:57.515 { 00:31:57.515 "name": "BaseBdev3", 00:31:57.515 "uuid": "23f8bf4e-3f1a-458e-b7a6-69858b1cd8f5", 00:31:57.515 "is_configured": true, 00:31:57.515 "data_offset": 2048, 00:31:57.515 "data_size": 63488 00:31:57.515 }, 00:31:57.515 { 00:31:57.515 "name": "BaseBdev4", 00:31:57.515 "uuid": "5f13fee4-1717-48eb-8857-f1ca8fedb7ad", 00:31:57.515 "is_configured": true, 00:31:57.515 "data_offset": 2048, 00:31:57.515 "data_size": 63488 00:31:57.515 } 00:31:57.515 ] 00:31:57.515 }' 00:31:57.515 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:57.515 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.774 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:57.774 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:57.774 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.774 
17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.774 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.774 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:57.774 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.774 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:57.774 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.034 [2024-11-26 17:28:35.226920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.034 [2024-11-26 17:28:35.376629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:58.034 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:31:58.293 17:28:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.293 [2024-11-26 17:28:35.527441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:31:58.293 [2024-11-26 17:28:35.527494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.293 BaseBdev2 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:58.293 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.294 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.294 [ 00:31:58.294 { 00:31:58.294 "name": "BaseBdev2", 00:31:58.294 "aliases": [ 00:31:58.294 
"a94439ce-c8d1-4966-ad6f-0d0416b55412" 00:31:58.294 ], 00:31:58.553 "product_name": "Malloc disk", 00:31:58.553 "block_size": 512, 00:31:58.553 "num_blocks": 65536, 00:31:58.553 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:31:58.553 "assigned_rate_limits": { 00:31:58.553 "rw_ios_per_sec": 0, 00:31:58.553 "rw_mbytes_per_sec": 0, 00:31:58.553 "r_mbytes_per_sec": 0, 00:31:58.553 "w_mbytes_per_sec": 0 00:31:58.553 }, 00:31:58.553 "claimed": false, 00:31:58.553 "zoned": false, 00:31:58.553 "supported_io_types": { 00:31:58.553 "read": true, 00:31:58.553 "write": true, 00:31:58.553 "unmap": true, 00:31:58.553 "flush": true, 00:31:58.553 "reset": true, 00:31:58.553 "nvme_admin": false, 00:31:58.553 "nvme_io": false, 00:31:58.553 "nvme_io_md": false, 00:31:58.553 "write_zeroes": true, 00:31:58.553 "zcopy": true, 00:31:58.553 "get_zone_info": false, 00:31:58.553 "zone_management": false, 00:31:58.553 "zone_append": false, 00:31:58.553 "compare": false, 00:31:58.553 "compare_and_write": false, 00:31:58.553 "abort": true, 00:31:58.553 "seek_hole": false, 00:31:58.553 "seek_data": false, 00:31:58.553 "copy": true, 00:31:58.553 "nvme_iov_md": false 00:31:58.553 }, 00:31:58.553 "memory_domains": [ 00:31:58.553 { 00:31:58.553 "dma_device_id": "system", 00:31:58.553 "dma_device_type": 1 00:31:58.553 }, 00:31:58.553 { 00:31:58.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.553 "dma_device_type": 2 00:31:58.553 } 00:31:58.553 ], 00:31:58.553 "driver_specific": {} 00:31:58.553 } 00:31:58.553 ] 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:58.553 17:28:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.553 BaseBdev3 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.553 [ 00:31:58.553 { 
00:31:58.553 "name": "BaseBdev3", 00:31:58.553 "aliases": [ 00:31:58.553 "6da9ea8b-e72f-4de8-8204-5e3707006421" 00:31:58.553 ], 00:31:58.553 "product_name": "Malloc disk", 00:31:58.553 "block_size": 512, 00:31:58.553 "num_blocks": 65536, 00:31:58.553 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:31:58.553 "assigned_rate_limits": { 00:31:58.553 "rw_ios_per_sec": 0, 00:31:58.553 "rw_mbytes_per_sec": 0, 00:31:58.553 "r_mbytes_per_sec": 0, 00:31:58.553 "w_mbytes_per_sec": 0 00:31:58.553 }, 00:31:58.553 "claimed": false, 00:31:58.553 "zoned": false, 00:31:58.553 "supported_io_types": { 00:31:58.553 "read": true, 00:31:58.553 "write": true, 00:31:58.553 "unmap": true, 00:31:58.553 "flush": true, 00:31:58.553 "reset": true, 00:31:58.553 "nvme_admin": false, 00:31:58.553 "nvme_io": false, 00:31:58.553 "nvme_io_md": false, 00:31:58.553 "write_zeroes": true, 00:31:58.553 "zcopy": true, 00:31:58.553 "get_zone_info": false, 00:31:58.553 "zone_management": false, 00:31:58.553 "zone_append": false, 00:31:58.553 "compare": false, 00:31:58.553 "compare_and_write": false, 00:31:58.553 "abort": true, 00:31:58.553 "seek_hole": false, 00:31:58.553 "seek_data": false, 00:31:58.553 "copy": true, 00:31:58.553 "nvme_iov_md": false 00:31:58.553 }, 00:31:58.553 "memory_domains": [ 00:31:58.553 { 00:31:58.553 "dma_device_id": "system", 00:31:58.553 "dma_device_type": 1 00:31:58.553 }, 00:31:58.553 { 00:31:58.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.553 "dma_device_type": 2 00:31:58.553 } 00:31:58.553 ], 00:31:58.553 "driver_specific": {} 00:31:58.553 } 00:31:58.553 ] 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.553 BaseBdev4 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:58.553 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:31:58.554 [ 00:31:58.554 { 00:31:58.554 "name": "BaseBdev4", 00:31:58.554 "aliases": [ 00:31:58.554 "104eb545-81b9-4572-9d0c-f9869965f1c7" 00:31:58.554 ], 00:31:58.554 "product_name": "Malloc disk", 00:31:58.554 "block_size": 512, 00:31:58.554 "num_blocks": 65536, 00:31:58.554 "uuid": "104eb545-81b9-4572-9d0c-f9869965f1c7", 00:31:58.554 "assigned_rate_limits": { 00:31:58.554 "rw_ios_per_sec": 0, 00:31:58.554 "rw_mbytes_per_sec": 0, 00:31:58.554 "r_mbytes_per_sec": 0, 00:31:58.554 "w_mbytes_per_sec": 0 00:31:58.554 }, 00:31:58.554 "claimed": false, 00:31:58.554 "zoned": false, 00:31:58.554 "supported_io_types": { 00:31:58.554 "read": true, 00:31:58.554 "write": true, 00:31:58.554 "unmap": true, 00:31:58.554 "flush": true, 00:31:58.554 "reset": true, 00:31:58.554 "nvme_admin": false, 00:31:58.554 "nvme_io": false, 00:31:58.554 "nvme_io_md": false, 00:31:58.554 "write_zeroes": true, 00:31:58.554 "zcopy": true, 00:31:58.554 "get_zone_info": false, 00:31:58.554 "zone_management": false, 00:31:58.554 "zone_append": false, 00:31:58.554 "compare": false, 00:31:58.554 "compare_and_write": false, 00:31:58.554 "abort": true, 00:31:58.554 "seek_hole": false, 00:31:58.554 "seek_data": false, 00:31:58.554 "copy": true, 00:31:58.554 "nvme_iov_md": false 00:31:58.554 }, 00:31:58.554 "memory_domains": [ 00:31:58.554 { 00:31:58.554 "dma_device_id": "system", 00:31:58.554 "dma_device_type": 1 00:31:58.554 }, 00:31:58.554 { 00:31:58.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.554 "dma_device_type": 2 00:31:58.554 } 00:31:58.554 ], 00:31:58.554 "driver_specific": {} 00:31:58.554 } 00:31:58.554 ] 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:58.554 17:28:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.554 [2024-11-26 17:28:35.897855] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:58.554 [2024-11-26 17:28:35.897905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:58.554 [2024-11-26 17:28:35.897933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:58.554 [2024-11-26 17:28:35.900216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:58.554 [2024-11-26 17:28:35.900271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.554 "name": "Existed_Raid", 00:31:58.554 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:31:58.554 "strip_size_kb": 64, 00:31:58.554 "state": "configuring", 00:31:58.554 "raid_level": "raid0", 00:31:58.554 "superblock": true, 00:31:58.554 "num_base_bdevs": 4, 00:31:58.554 "num_base_bdevs_discovered": 3, 00:31:58.554 "num_base_bdevs_operational": 4, 00:31:58.554 "base_bdevs_list": [ 00:31:58.554 { 00:31:58.554 "name": "BaseBdev1", 00:31:58.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.554 "is_configured": false, 00:31:58.554 "data_offset": 0, 00:31:58.554 "data_size": 0 00:31:58.554 }, 00:31:58.554 { 00:31:58.554 "name": "BaseBdev2", 00:31:58.554 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:31:58.554 "is_configured": true, 00:31:58.554 "data_offset": 2048, 00:31:58.554 "data_size": 63488 
00:31:58.554 }, 00:31:58.554 { 00:31:58.554 "name": "BaseBdev3", 00:31:58.554 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:31:58.554 "is_configured": true, 00:31:58.554 "data_offset": 2048, 00:31:58.554 "data_size": 63488 00:31:58.554 }, 00:31:58.554 { 00:31:58.554 "name": "BaseBdev4", 00:31:58.554 "uuid": "104eb545-81b9-4572-9d0c-f9869965f1c7", 00:31:58.554 "is_configured": true, 00:31:58.554 "data_offset": 2048, 00:31:58.554 "data_size": 63488 00:31:58.554 } 00:31:58.554 ] 00:31:58.554 }' 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.554 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.121 [2024-11-26 17:28:36.341942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.121 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:59.121 "name": "Existed_Raid", 00:31:59.121 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:31:59.121 "strip_size_kb": 64, 00:31:59.121 "state": "configuring", 00:31:59.121 "raid_level": "raid0", 00:31:59.121 "superblock": true, 00:31:59.121 "num_base_bdevs": 4, 00:31:59.121 "num_base_bdevs_discovered": 2, 00:31:59.121 "num_base_bdevs_operational": 4, 00:31:59.121 "base_bdevs_list": [ 00:31:59.121 { 00:31:59.121 "name": "BaseBdev1", 00:31:59.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.121 "is_configured": false, 00:31:59.121 "data_offset": 0, 00:31:59.121 "data_size": 0 00:31:59.121 }, 00:31:59.121 { 00:31:59.121 "name": null, 00:31:59.122 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:31:59.122 "is_configured": false, 00:31:59.122 "data_offset": 0, 00:31:59.122 "data_size": 63488 
00:31:59.122 }, 00:31:59.122 { 00:31:59.122 "name": "BaseBdev3", 00:31:59.122 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:31:59.122 "is_configured": true, 00:31:59.122 "data_offset": 2048, 00:31:59.122 "data_size": 63488 00:31:59.122 }, 00:31:59.122 { 00:31:59.122 "name": "BaseBdev4", 00:31:59.122 "uuid": "104eb545-81b9-4572-9d0c-f9869965f1c7", 00:31:59.122 "is_configured": true, 00:31:59.122 "data_offset": 2048, 00:31:59.122 "data_size": 63488 00:31:59.122 } 00:31:59.122 ] 00:31:59.122 }' 00:31:59.122 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:59.122 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.380 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:59.380 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.380 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.380 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.640 [2024-11-26 17:28:36.881125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:59.640 BaseBdev1 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.640 [ 00:31:59.640 { 00:31:59.640 "name": "BaseBdev1", 00:31:59.640 "aliases": [ 00:31:59.640 "e5aad230-7786-46ea-ab19-1838f9d3850a" 00:31:59.640 ], 00:31:59.640 "product_name": "Malloc disk", 00:31:59.640 "block_size": 512, 00:31:59.640 "num_blocks": 65536, 00:31:59.640 "uuid": "e5aad230-7786-46ea-ab19-1838f9d3850a", 00:31:59.640 "assigned_rate_limits": { 00:31:59.640 "rw_ios_per_sec": 0, 00:31:59.640 "rw_mbytes_per_sec": 0, 
00:31:59.640 "r_mbytes_per_sec": 0, 00:31:59.640 "w_mbytes_per_sec": 0 00:31:59.640 }, 00:31:59.640 "claimed": true, 00:31:59.640 "claim_type": "exclusive_write", 00:31:59.640 "zoned": false, 00:31:59.640 "supported_io_types": { 00:31:59.640 "read": true, 00:31:59.640 "write": true, 00:31:59.640 "unmap": true, 00:31:59.640 "flush": true, 00:31:59.640 "reset": true, 00:31:59.640 "nvme_admin": false, 00:31:59.640 "nvme_io": false, 00:31:59.640 "nvme_io_md": false, 00:31:59.640 "write_zeroes": true, 00:31:59.640 "zcopy": true, 00:31:59.640 "get_zone_info": false, 00:31:59.640 "zone_management": false, 00:31:59.640 "zone_append": false, 00:31:59.640 "compare": false, 00:31:59.640 "compare_and_write": false, 00:31:59.640 "abort": true, 00:31:59.640 "seek_hole": false, 00:31:59.640 "seek_data": false, 00:31:59.640 "copy": true, 00:31:59.640 "nvme_iov_md": false 00:31:59.640 }, 00:31:59.640 "memory_domains": [ 00:31:59.640 { 00:31:59.640 "dma_device_id": "system", 00:31:59.640 "dma_device_type": 1 00:31:59.640 }, 00:31:59.640 { 00:31:59.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:59.640 "dma_device_type": 2 00:31:59.640 } 00:31:59.640 ], 00:31:59.640 "driver_specific": {} 00:31:59.640 } 00:31:59.640 ] 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:59.640 17:28:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:59.640 "name": "Existed_Raid", 00:31:59.640 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:31:59.640 "strip_size_kb": 64, 00:31:59.640 "state": "configuring", 00:31:59.640 "raid_level": "raid0", 00:31:59.640 "superblock": true, 00:31:59.640 "num_base_bdevs": 4, 00:31:59.640 "num_base_bdevs_discovered": 3, 00:31:59.640 "num_base_bdevs_operational": 4, 00:31:59.640 "base_bdevs_list": [ 00:31:59.640 { 00:31:59.640 "name": "BaseBdev1", 00:31:59.640 "uuid": "e5aad230-7786-46ea-ab19-1838f9d3850a", 00:31:59.640 "is_configured": true, 00:31:59.640 "data_offset": 2048, 00:31:59.640 "data_size": 63488 00:31:59.640 }, 00:31:59.640 { 
00:31:59.640 "name": null, 00:31:59.640 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:31:59.640 "is_configured": false, 00:31:59.640 "data_offset": 0, 00:31:59.640 "data_size": 63488 00:31:59.640 }, 00:31:59.640 { 00:31:59.640 "name": "BaseBdev3", 00:31:59.640 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:31:59.640 "is_configured": true, 00:31:59.640 "data_offset": 2048, 00:31:59.640 "data_size": 63488 00:31:59.640 }, 00:31:59.640 { 00:31:59.640 "name": "BaseBdev4", 00:31:59.640 "uuid": "104eb545-81b9-4572-9d0c-f9869965f1c7", 00:31:59.640 "is_configured": true, 00:31:59.640 "data_offset": 2048, 00:31:59.640 "data_size": 63488 00:31:59.640 } 00:31:59.640 ] 00:31:59.640 }' 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:59.640 17:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.207 [2024-11-26 17:28:37.405315] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.207 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.207 17:28:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:00.207 "name": "Existed_Raid", 00:32:00.207 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:32:00.207 "strip_size_kb": 64, 00:32:00.207 "state": "configuring", 00:32:00.207 "raid_level": "raid0", 00:32:00.207 "superblock": true, 00:32:00.207 "num_base_bdevs": 4, 00:32:00.207 "num_base_bdevs_discovered": 2, 00:32:00.207 "num_base_bdevs_operational": 4, 00:32:00.207 "base_bdevs_list": [ 00:32:00.207 { 00:32:00.207 "name": "BaseBdev1", 00:32:00.207 "uuid": "e5aad230-7786-46ea-ab19-1838f9d3850a", 00:32:00.207 "is_configured": true, 00:32:00.207 "data_offset": 2048, 00:32:00.207 "data_size": 63488 00:32:00.207 }, 00:32:00.207 { 00:32:00.207 "name": null, 00:32:00.207 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:32:00.207 "is_configured": false, 00:32:00.207 "data_offset": 0, 00:32:00.208 "data_size": 63488 00:32:00.208 }, 00:32:00.208 { 00:32:00.208 "name": null, 00:32:00.208 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:32:00.208 "is_configured": false, 00:32:00.208 "data_offset": 0, 00:32:00.208 "data_size": 63488 00:32:00.208 }, 00:32:00.208 { 00:32:00.208 "name": "BaseBdev4", 00:32:00.208 "uuid": "104eb545-81b9-4572-9d0c-f9869965f1c7", 00:32:00.208 "is_configured": true, 00:32:00.208 "data_offset": 2048, 00:32:00.208 "data_size": 63488 00:32:00.208 } 00:32:00.208 ] 00:32:00.208 }' 00:32:00.208 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:00.208 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.467 17:28:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.467 [2024-11-26 17:28:37.897420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.467 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:00.728 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.728 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:00.728 "name": "Existed_Raid", 00:32:00.728 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:32:00.728 "strip_size_kb": 64, 00:32:00.728 "state": "configuring", 00:32:00.728 "raid_level": "raid0", 00:32:00.728 "superblock": true, 00:32:00.728 "num_base_bdevs": 4, 00:32:00.728 "num_base_bdevs_discovered": 3, 00:32:00.728 "num_base_bdevs_operational": 4, 00:32:00.728 "base_bdevs_list": [ 00:32:00.728 { 00:32:00.728 "name": "BaseBdev1", 00:32:00.728 "uuid": "e5aad230-7786-46ea-ab19-1838f9d3850a", 00:32:00.728 "is_configured": true, 00:32:00.728 "data_offset": 2048, 00:32:00.728 "data_size": 63488 00:32:00.728 }, 00:32:00.728 { 00:32:00.728 "name": null, 00:32:00.729 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:32:00.729 "is_configured": false, 00:32:00.729 "data_offset": 0, 00:32:00.729 "data_size": 63488 00:32:00.729 }, 00:32:00.729 { 00:32:00.729 "name": "BaseBdev3", 00:32:00.729 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:32:00.729 "is_configured": true, 00:32:00.729 "data_offset": 2048, 00:32:00.729 "data_size": 63488 00:32:00.729 }, 00:32:00.729 { 00:32:00.729 "name": "BaseBdev4", 00:32:00.729 "uuid": 
"104eb545-81b9-4572-9d0c-f9869965f1c7", 00:32:00.729 "is_configured": true, 00:32:00.729 "data_offset": 2048, 00:32:00.729 "data_size": 63488 00:32:00.729 } 00:32:00.729 ] 00:32:00.729 }' 00:32:00.729 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:00.729 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.987 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.987 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.987 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:00.987 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.987 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.987 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:00.987 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:00.987 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.987 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.987 [2024-11-26 17:28:38.361537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.246 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:01.246 "name": "Existed_Raid", 00:32:01.246 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:32:01.247 "strip_size_kb": 64, 00:32:01.247 "state": "configuring", 00:32:01.247 "raid_level": "raid0", 00:32:01.247 "superblock": true, 00:32:01.247 "num_base_bdevs": 4, 00:32:01.247 "num_base_bdevs_discovered": 2, 00:32:01.247 "num_base_bdevs_operational": 4, 00:32:01.247 "base_bdevs_list": [ 00:32:01.247 { 00:32:01.247 "name": null, 00:32:01.247 
"uuid": "e5aad230-7786-46ea-ab19-1838f9d3850a", 00:32:01.247 "is_configured": false, 00:32:01.247 "data_offset": 0, 00:32:01.247 "data_size": 63488 00:32:01.247 }, 00:32:01.247 { 00:32:01.247 "name": null, 00:32:01.247 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:32:01.247 "is_configured": false, 00:32:01.247 "data_offset": 0, 00:32:01.247 "data_size": 63488 00:32:01.247 }, 00:32:01.247 { 00:32:01.247 "name": "BaseBdev3", 00:32:01.247 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:32:01.247 "is_configured": true, 00:32:01.247 "data_offset": 2048, 00:32:01.247 "data_size": 63488 00:32:01.247 }, 00:32:01.247 { 00:32:01.247 "name": "BaseBdev4", 00:32:01.247 "uuid": "104eb545-81b9-4572-9d0c-f9869965f1c7", 00:32:01.247 "is_configured": true, 00:32:01.247 "data_offset": 2048, 00:32:01.247 "data_size": 63488 00:32:01.247 } 00:32:01.247 ] 00:32:01.247 }' 00:32:01.247 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:01.247 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.505 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.505 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.505 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.505 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:01.505 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.505 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:01.505 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:01.505 17:28:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.505 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.506 [2024-11-26 17:28:38.949864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.764 17:28:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.764 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:01.764 "name": "Existed_Raid", 00:32:01.764 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:32:01.764 "strip_size_kb": 64, 00:32:01.764 "state": "configuring", 00:32:01.764 "raid_level": "raid0", 00:32:01.764 "superblock": true, 00:32:01.764 "num_base_bdevs": 4, 00:32:01.764 "num_base_bdevs_discovered": 3, 00:32:01.764 "num_base_bdevs_operational": 4, 00:32:01.764 "base_bdevs_list": [ 00:32:01.764 { 00:32:01.764 "name": null, 00:32:01.764 "uuid": "e5aad230-7786-46ea-ab19-1838f9d3850a", 00:32:01.764 "is_configured": false, 00:32:01.764 "data_offset": 0, 00:32:01.764 "data_size": 63488 00:32:01.764 }, 00:32:01.764 { 00:32:01.764 "name": "BaseBdev2", 00:32:01.765 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:32:01.765 "is_configured": true, 00:32:01.765 "data_offset": 2048, 00:32:01.765 "data_size": 63488 00:32:01.765 }, 00:32:01.765 { 00:32:01.765 "name": "BaseBdev3", 00:32:01.765 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:32:01.765 "is_configured": true, 00:32:01.765 "data_offset": 2048, 00:32:01.765 "data_size": 63488 00:32:01.765 }, 00:32:01.765 { 00:32:01.765 "name": "BaseBdev4", 00:32:01.765 "uuid": "104eb545-81b9-4572-9d0c-f9869965f1c7", 00:32:01.765 "is_configured": true, 00:32:01.765 "data_offset": 2048, 00:32:01.765 "data_size": 63488 00:32:01.765 } 00:32:01.765 ] 00:32:01.765 }' 00:32:01.765 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:01.765 17:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.022 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.023 17:28:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.023 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.023 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:02.023 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.023 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:02.023 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.023 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.023 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.023 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e5aad230-7786-46ea-ab19-1838f9d3850a 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.282 [2024-11-26 17:28:39.540426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:02.282 [2024-11-26 17:28:39.540660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:02.282 [2024-11-26 17:28:39.540675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:02.282 [2024-11-26 17:28:39.540960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:32:02.282 [2024-11-26 17:28:39.541122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:02.282 [2024-11-26 17:28:39.541137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:32:02.282 [2024-11-26 17:28:39.541271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:02.282 NewBaseBdev 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.282 17:28:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.282 [ 00:32:02.282 { 00:32:02.282 "name": "NewBaseBdev", 00:32:02.282 "aliases": [ 00:32:02.282 "e5aad230-7786-46ea-ab19-1838f9d3850a" 00:32:02.282 ], 00:32:02.282 "product_name": "Malloc disk", 00:32:02.282 "block_size": 512, 00:32:02.282 "num_blocks": 65536, 00:32:02.282 "uuid": "e5aad230-7786-46ea-ab19-1838f9d3850a", 00:32:02.282 "assigned_rate_limits": { 00:32:02.282 "rw_ios_per_sec": 0, 00:32:02.282 "rw_mbytes_per_sec": 0, 00:32:02.282 "r_mbytes_per_sec": 0, 00:32:02.282 "w_mbytes_per_sec": 0 00:32:02.282 }, 00:32:02.282 "claimed": true, 00:32:02.282 "claim_type": "exclusive_write", 00:32:02.282 "zoned": false, 00:32:02.282 "supported_io_types": { 00:32:02.282 "read": true, 00:32:02.282 "write": true, 00:32:02.282 "unmap": true, 00:32:02.282 "flush": true, 00:32:02.282 "reset": true, 00:32:02.282 "nvme_admin": false, 00:32:02.282 "nvme_io": false, 00:32:02.282 "nvme_io_md": false, 00:32:02.282 "write_zeroes": true, 00:32:02.282 "zcopy": true, 00:32:02.282 "get_zone_info": false, 00:32:02.282 "zone_management": false, 00:32:02.282 "zone_append": false, 00:32:02.282 "compare": false, 00:32:02.282 "compare_and_write": false, 00:32:02.282 "abort": true, 00:32:02.282 "seek_hole": false, 00:32:02.282 "seek_data": false, 00:32:02.282 "copy": true, 00:32:02.282 "nvme_iov_md": false 00:32:02.282 }, 00:32:02.282 "memory_domains": [ 00:32:02.282 { 00:32:02.282 "dma_device_id": "system", 00:32:02.282 "dma_device_type": 1 00:32:02.282 }, 00:32:02.282 { 00:32:02.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.282 "dma_device_type": 2 00:32:02.282 } 00:32:02.282 ], 00:32:02.282 "driver_specific": {} 00:32:02.282 } 00:32:02.282 ] 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:02.282 17:28:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.282 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.283 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:02.283 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.283 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.283 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:02.283 "name": "Existed_Raid", 00:32:02.283 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:32:02.283 "strip_size_kb": 64, 00:32:02.283 
"state": "online", 00:32:02.283 "raid_level": "raid0", 00:32:02.283 "superblock": true, 00:32:02.283 "num_base_bdevs": 4, 00:32:02.283 "num_base_bdevs_discovered": 4, 00:32:02.283 "num_base_bdevs_operational": 4, 00:32:02.283 "base_bdevs_list": [ 00:32:02.283 { 00:32:02.283 "name": "NewBaseBdev", 00:32:02.283 "uuid": "e5aad230-7786-46ea-ab19-1838f9d3850a", 00:32:02.283 "is_configured": true, 00:32:02.283 "data_offset": 2048, 00:32:02.283 "data_size": 63488 00:32:02.283 }, 00:32:02.283 { 00:32:02.283 "name": "BaseBdev2", 00:32:02.283 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:32:02.283 "is_configured": true, 00:32:02.283 "data_offset": 2048, 00:32:02.283 "data_size": 63488 00:32:02.283 }, 00:32:02.283 { 00:32:02.283 "name": "BaseBdev3", 00:32:02.283 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:32:02.283 "is_configured": true, 00:32:02.283 "data_offset": 2048, 00:32:02.283 "data_size": 63488 00:32:02.283 }, 00:32:02.283 { 00:32:02.283 "name": "BaseBdev4", 00:32:02.283 "uuid": "104eb545-81b9-4572-9d0c-f9869965f1c7", 00:32:02.283 "is_configured": true, 00:32:02.283 "data_offset": 2048, 00:32:02.283 "data_size": 63488 00:32:02.283 } 00:32:02.283 ] 00:32:02.283 }' 00:32:02.283 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:02.283 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:02.851 
17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:02.851 [2024-11-26 17:28:40.060949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.851 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:02.851 "name": "Existed_Raid", 00:32:02.851 "aliases": [ 00:32:02.851 "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c" 00:32:02.851 ], 00:32:02.851 "product_name": "Raid Volume", 00:32:02.851 "block_size": 512, 00:32:02.851 "num_blocks": 253952, 00:32:02.851 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:32:02.851 "assigned_rate_limits": { 00:32:02.851 "rw_ios_per_sec": 0, 00:32:02.851 "rw_mbytes_per_sec": 0, 00:32:02.851 "r_mbytes_per_sec": 0, 00:32:02.851 "w_mbytes_per_sec": 0 00:32:02.851 }, 00:32:02.851 "claimed": false, 00:32:02.851 "zoned": false, 00:32:02.851 "supported_io_types": { 00:32:02.851 "read": true, 00:32:02.851 "write": true, 00:32:02.851 "unmap": true, 00:32:02.851 "flush": true, 00:32:02.851 "reset": true, 00:32:02.851 "nvme_admin": false, 00:32:02.851 "nvme_io": false, 00:32:02.851 "nvme_io_md": false, 00:32:02.851 "write_zeroes": true, 00:32:02.851 "zcopy": false, 00:32:02.851 "get_zone_info": false, 00:32:02.851 "zone_management": false, 00:32:02.851 "zone_append": false, 00:32:02.851 "compare": false, 00:32:02.851 "compare_and_write": false, 00:32:02.851 "abort": 
false, 00:32:02.851 "seek_hole": false, 00:32:02.851 "seek_data": false, 00:32:02.851 "copy": false, 00:32:02.851 "nvme_iov_md": false 00:32:02.851 }, 00:32:02.851 "memory_domains": [ 00:32:02.851 { 00:32:02.851 "dma_device_id": "system", 00:32:02.851 "dma_device_type": 1 00:32:02.851 }, 00:32:02.851 { 00:32:02.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.852 "dma_device_type": 2 00:32:02.852 }, 00:32:02.852 { 00:32:02.852 "dma_device_id": "system", 00:32:02.852 "dma_device_type": 1 00:32:02.852 }, 00:32:02.852 { 00:32:02.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.852 "dma_device_type": 2 00:32:02.852 }, 00:32:02.852 { 00:32:02.852 "dma_device_id": "system", 00:32:02.852 "dma_device_type": 1 00:32:02.852 }, 00:32:02.852 { 00:32:02.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.852 "dma_device_type": 2 00:32:02.852 }, 00:32:02.852 { 00:32:02.852 "dma_device_id": "system", 00:32:02.852 "dma_device_type": 1 00:32:02.852 }, 00:32:02.852 { 00:32:02.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.852 "dma_device_type": 2 00:32:02.852 } 00:32:02.852 ], 00:32:02.852 "driver_specific": { 00:32:02.852 "raid": { 00:32:02.852 "uuid": "80d6cc7d-177c-4d0d-93d5-4d91e2e8ca4c", 00:32:02.852 "strip_size_kb": 64, 00:32:02.852 "state": "online", 00:32:02.852 "raid_level": "raid0", 00:32:02.852 "superblock": true, 00:32:02.852 "num_base_bdevs": 4, 00:32:02.852 "num_base_bdevs_discovered": 4, 00:32:02.852 "num_base_bdevs_operational": 4, 00:32:02.852 "base_bdevs_list": [ 00:32:02.852 { 00:32:02.852 "name": "NewBaseBdev", 00:32:02.852 "uuid": "e5aad230-7786-46ea-ab19-1838f9d3850a", 00:32:02.852 "is_configured": true, 00:32:02.852 "data_offset": 2048, 00:32:02.852 "data_size": 63488 00:32:02.852 }, 00:32:02.852 { 00:32:02.852 "name": "BaseBdev2", 00:32:02.852 "uuid": "a94439ce-c8d1-4966-ad6f-0d0416b55412", 00:32:02.852 "is_configured": true, 00:32:02.852 "data_offset": 2048, 00:32:02.852 "data_size": 63488 00:32:02.852 }, 00:32:02.852 { 00:32:02.852 
"name": "BaseBdev3", 00:32:02.852 "uuid": "6da9ea8b-e72f-4de8-8204-5e3707006421", 00:32:02.852 "is_configured": true, 00:32:02.852 "data_offset": 2048, 00:32:02.852 "data_size": 63488 00:32:02.852 }, 00:32:02.852 { 00:32:02.852 "name": "BaseBdev4", 00:32:02.852 "uuid": "104eb545-81b9-4572-9d0c-f9869965f1c7", 00:32:02.852 "is_configured": true, 00:32:02.852 "data_offset": 2048, 00:32:02.852 "data_size": 63488 00:32:02.852 } 00:32:02.852 ] 00:32:02.852 } 00:32:02.852 } 00:32:02.852 }' 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:02.852 BaseBdev2 00:32:02.852 BaseBdev3 00:32:02.852 BaseBdev4' 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:02.852 17:28:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.852 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:03.111 [2024-11-26 17:28:40.384671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:03.111 [2024-11-26 17:28:40.384707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:03.111 [2024-11-26 17:28:40.384786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:03.111 [2024-11-26 17:28:40.384856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:03.111 [2024-11-26 17:28:40.384869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70477 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70477 ']' 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70477 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70477 00:32:03.111 killing process with pid 70477 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70477' 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70477 00:32:03.111 [2024-11-26 17:28:40.428322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:03.111 17:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70477 00:32:03.678 [2024-11-26 17:28:40.840272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:04.618 17:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:04.618 00:32:04.618 real 0m11.633s 00:32:04.618 user 0m18.585s 00:32:04.618 sys 0m2.194s 00:32:04.618 17:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.618 17:28:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:04.618 ************************************ 00:32:04.618 END TEST raid_state_function_test_sb 00:32:04.618 ************************************ 00:32:04.618 17:28:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:32:04.618 17:28:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:04.618 17:28:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.618 17:28:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:04.877 ************************************ 00:32:04.877 START TEST raid_superblock_test 00:32:04.877 ************************************ 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71148 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71148 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71148 ']' 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:04.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:04.877 17:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.877 [2024-11-26 17:28:42.151706] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:32:04.877 [2024-11-26 17:28:42.151842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71148 ] 00:32:04.877 [2024-11-26 17:28:42.317958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.135 [2024-11-26 17:28:42.438892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.394 [2024-11-26 17:28:42.651228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:05.394 [2024-11-26 17:28:42.651293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:05.961 
17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.961 malloc1 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.961 [2024-11-26 17:28:43.183623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:05.961 [2024-11-26 17:28:43.183684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.961 [2024-11-26 17:28:43.183710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:05.961 [2024-11-26 17:28:43.183722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.961 [2024-11-26 17:28:43.186207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.961 [2024-11-26 17:28:43.186248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:05.961 pt1 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:05.961 17:28:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.962 malloc2 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.962 [2024-11-26 17:28:43.237799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:05.962 [2024-11-26 17:28:43.237862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.962 [2024-11-26 17:28:43.237894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:05.962 [2024-11-26 17:28:43.237906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.962 [2024-11-26 17:28:43.240315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.962 [2024-11-26 17:28:43.240352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:05.962 
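The loop traced here (bdev_raid.sh@416-426) derives, for each base bdev index, a malloc name, a passthru name, and a zero-padded UUID, collects them into arrays, then creates a malloc bdev and stacks a passthru bdev on top of it. A sketch of the name/UUID derivation under the assumption of four base bdevs (the rpc_cmd calls need a live SPDK target, so they are left as comments):

```shell
# Sketch of the per-base-bdev naming loop traced above. Only the
# name/UUID derivation is reproduced; the rpc_cmd invocations that would
# actually create the bdevs require a running SPDK target.
num_base_bdevs=4
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc=malloc$i
    bdev_pt=pt$i
    bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    # rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done
```

This yields the pt1..pt4 passthru bdevs and the `00000000-0000-0000-0000-00000000000N` UUIDs that appear throughout the RPC output below.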
pt2 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.962 malloc3 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.962 [2024-11-26 17:28:43.309081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:05.962 [2024-11-26 17:28:43.309138] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.962 [2024-11-26 17:28:43.309163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:05.962 [2024-11-26 17:28:43.309175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.962 [2024-11-26 17:28:43.311561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.962 [2024-11-26 17:28:43.311601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:05.962 pt3 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.962 malloc4 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.962 [2024-11-26 17:28:43.364331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:05.962 [2024-11-26 17:28:43.364396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.962 [2024-11-26 17:28:43.364423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:05.962 [2024-11-26 17:28:43.364434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.962 [2024-11-26 17:28:43.366827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.962 [2024-11-26 17:28:43.366868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:05.962 pt4 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.962 [2024-11-26 17:28:43.376350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:05.962 [2024-11-26 
17:28:43.378463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:05.962 [2024-11-26 17:28:43.378558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:05.962 [2024-11-26 17:28:43.378602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:05.962 [2024-11-26 17:28:43.378780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:05.962 [2024-11-26 17:28:43.378792] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:05.962 [2024-11-26 17:28:43.379086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:05.962 [2024-11-26 17:28:43.379260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:05.962 [2024-11-26 17:28:43.379274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:05.962 [2024-11-26 17:28:43.379435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.962 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.221 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.221 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:06.221 "name": "raid_bdev1", 00:32:06.221 "uuid": "94975f64-bdf9-41cf-8461-dfe0f2d44e85", 00:32:06.221 "strip_size_kb": 64, 00:32:06.221 "state": "online", 00:32:06.221 "raid_level": "raid0", 00:32:06.221 "superblock": true, 00:32:06.221 "num_base_bdevs": 4, 00:32:06.221 "num_base_bdevs_discovered": 4, 00:32:06.221 "num_base_bdevs_operational": 4, 00:32:06.221 "base_bdevs_list": [ 00:32:06.221 { 00:32:06.221 "name": "pt1", 00:32:06.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:06.221 "is_configured": true, 00:32:06.221 "data_offset": 2048, 00:32:06.221 "data_size": 63488 00:32:06.221 }, 00:32:06.221 { 00:32:06.221 "name": "pt2", 00:32:06.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:06.221 "is_configured": true, 00:32:06.221 "data_offset": 2048, 00:32:06.221 "data_size": 63488 00:32:06.221 }, 00:32:06.221 { 00:32:06.221 "name": "pt3", 00:32:06.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:06.221 "is_configured": true, 00:32:06.221 "data_offset": 2048, 00:32:06.221 
"data_size": 63488 00:32:06.221 }, 00:32:06.221 { 00:32:06.221 "name": "pt4", 00:32:06.221 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:06.221 "is_configured": true, 00:32:06.221 "data_offset": 2048, 00:32:06.221 "data_size": 63488 00:32:06.221 } 00:32:06.221 ] 00:32:06.221 }' 00:32:06.221 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:06.221 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:06.480 [2024-11-26 17:28:43.801132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.480 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:06.480 "name": "raid_bdev1", 00:32:06.480 "aliases": [ 00:32:06.480 "94975f64-bdf9-41cf-8461-dfe0f2d44e85" 
00:32:06.480 ], 00:32:06.480 "product_name": "Raid Volume", 00:32:06.480 "block_size": 512, 00:32:06.480 "num_blocks": 253952, 00:32:06.480 "uuid": "94975f64-bdf9-41cf-8461-dfe0f2d44e85", 00:32:06.480 "assigned_rate_limits": { 00:32:06.480 "rw_ios_per_sec": 0, 00:32:06.480 "rw_mbytes_per_sec": 0, 00:32:06.480 "r_mbytes_per_sec": 0, 00:32:06.480 "w_mbytes_per_sec": 0 00:32:06.480 }, 00:32:06.480 "claimed": false, 00:32:06.480 "zoned": false, 00:32:06.480 "supported_io_types": { 00:32:06.480 "read": true, 00:32:06.480 "write": true, 00:32:06.480 "unmap": true, 00:32:06.480 "flush": true, 00:32:06.480 "reset": true, 00:32:06.480 "nvme_admin": false, 00:32:06.480 "nvme_io": false, 00:32:06.480 "nvme_io_md": false, 00:32:06.480 "write_zeroes": true, 00:32:06.480 "zcopy": false, 00:32:06.480 "get_zone_info": false, 00:32:06.480 "zone_management": false, 00:32:06.480 "zone_append": false, 00:32:06.480 "compare": false, 00:32:06.480 "compare_and_write": false, 00:32:06.480 "abort": false, 00:32:06.480 "seek_hole": false, 00:32:06.480 "seek_data": false, 00:32:06.480 "copy": false, 00:32:06.480 "nvme_iov_md": false 00:32:06.480 }, 00:32:06.480 "memory_domains": [ 00:32:06.480 { 00:32:06.480 "dma_device_id": "system", 00:32:06.480 "dma_device_type": 1 00:32:06.480 }, 00:32:06.480 { 00:32:06.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.480 "dma_device_type": 2 00:32:06.480 }, 00:32:06.480 { 00:32:06.480 "dma_device_id": "system", 00:32:06.480 "dma_device_type": 1 00:32:06.480 }, 00:32:06.480 { 00:32:06.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.480 "dma_device_type": 2 00:32:06.480 }, 00:32:06.480 { 00:32:06.480 "dma_device_id": "system", 00:32:06.480 "dma_device_type": 1 00:32:06.480 }, 00:32:06.480 { 00:32:06.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.480 "dma_device_type": 2 00:32:06.480 }, 00:32:06.480 { 00:32:06.480 "dma_device_id": "system", 00:32:06.480 "dma_device_type": 1 00:32:06.480 }, 00:32:06.480 { 00:32:06.480 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:32:06.480 "dma_device_type": 2 00:32:06.480 } 00:32:06.480 ], 00:32:06.480 "driver_specific": { 00:32:06.480 "raid": { 00:32:06.480 "uuid": "94975f64-bdf9-41cf-8461-dfe0f2d44e85", 00:32:06.480 "strip_size_kb": 64, 00:32:06.480 "state": "online", 00:32:06.480 "raid_level": "raid0", 00:32:06.480 "superblock": true, 00:32:06.480 "num_base_bdevs": 4, 00:32:06.480 "num_base_bdevs_discovered": 4, 00:32:06.480 "num_base_bdevs_operational": 4, 00:32:06.480 "base_bdevs_list": [ 00:32:06.480 { 00:32:06.480 "name": "pt1", 00:32:06.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:06.480 "is_configured": true, 00:32:06.480 "data_offset": 2048, 00:32:06.480 "data_size": 63488 00:32:06.480 }, 00:32:06.480 { 00:32:06.480 "name": "pt2", 00:32:06.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:06.480 "is_configured": true, 00:32:06.480 "data_offset": 2048, 00:32:06.480 "data_size": 63488 00:32:06.481 }, 00:32:06.481 { 00:32:06.481 "name": "pt3", 00:32:06.481 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:06.481 "is_configured": true, 00:32:06.481 "data_offset": 2048, 00:32:06.481 "data_size": 63488 00:32:06.481 }, 00:32:06.481 { 00:32:06.481 "name": "pt4", 00:32:06.481 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:06.481 "is_configured": true, 00:32:06.481 "data_offset": 2048, 00:32:06.481 "data_size": 63488 00:32:06.481 } 00:32:06.481 ] 00:32:06.481 } 00:32:06.481 } 00:32:06.481 }' 00:32:06.481 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:06.481 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:06.481 pt2 00:32:06.481 pt3 00:32:06.481 pt4' 00:32:06.481 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.740 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:06.740 17:28:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.740 [2024-11-26 17:28:44.133114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=94975f64-bdf9-41cf-8461-dfe0f2d44e85 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 94975f64-bdf9-41cf-8461-dfe0f2d44e85 ']' 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.740 [2024-11-26 17:28:44.172835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:06.740 [2024-11-26 17:28:44.172867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:06.740 [2024-11-26 17:28:44.172950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:06.740 [2024-11-26 17:28:44.173019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:06.740 [2024-11-26 17:28:44.173053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:32:06.740 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.999 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.000 17:28:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.000 [2024-11-26 17:28:44.328911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:07.000 [2024-11-26 17:28:44.331063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:07.000 [2024-11-26 17:28:44.331113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:07.000 [2024-11-26 17:28:44.331148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:32:07.000 [2024-11-26 17:28:44.331202] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:07.000 [2024-11-26 17:28:44.331257] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:07.000 [2024-11-26 17:28:44.331279] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:32:07.000 [2024-11-26 17:28:44.331302] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:32:07.000 [2024-11-26 17:28:44.331318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:07.000 [2024-11-26 17:28:44.331334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring
00:32:07.000 request:
00:32:07.000 {
00:32:07.000 "name": "raid_bdev1",
00:32:07.000 "raid_level": "raid0",
00:32:07.000 "base_bdevs": [
00:32:07.000 "malloc1",
00:32:07.000 "malloc2",
00:32:07.000 "malloc3",
00:32:07.000 "malloc4"
00:32:07.000 ],
00:32:07.000 "strip_size_kb": 64,
00:32:07.000 "superblock": false,
00:32:07.000 "method": "bdev_raid_create",
00:32:07.000 "req_id": 1
00:32:07.000 }
00:32:07.000 Got JSON-RPC error response
00:32:07.000 response:
00:32:07.000 {
00:32:07.000 "code": -17,
00:32:07.000 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:32:07.000 }
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:07.000 [2024-11-26 17:28:44.392885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:32:07.000 [2024-11-26 17:28:44.393113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:32:07.000 [2024-11-26 17:28:44.393233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:32:07.000 [2024-11-26 17:28:44.393330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:32:07.000 [2024-11-26 17:28:44.395844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:32:07.000 [2024-11-26 17:28:44.395987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:32:07.000 [2024-11-26 17:28:44.396181] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:32:07.000 [2024-11-26 17:28:44.396318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:32:07.000 pt1
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:32:07.000 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:07.259 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:07.259 "name": "raid_bdev1",
00:32:07.259 "uuid": "94975f64-bdf9-41cf-8461-dfe0f2d44e85",
00:32:07.259 "strip_size_kb": 64,
00:32:07.259 "state": "configuring",
00:32:07.259 "raid_level": "raid0",
00:32:07.259 "superblock": true,
00:32:07.259 "num_base_bdevs": 4,
00:32:07.259 "num_base_bdevs_discovered": 1,
00:32:07.259 "num_base_bdevs_operational": 4,
00:32:07.259 "base_bdevs_list": [
00:32:07.259 {
00:32:07.259 "name": "pt1",
00:32:07.259 "uuid": "00000000-0000-0000-0000-000000000001",
00:32:07.259 "is_configured": true,
00:32:07.259 "data_offset": 2048,
00:32:07.259 "data_size": 63488
00:32:07.259 },
00:32:07.259 {
00:32:07.259 "name": null,
00:32:07.259 "uuid": "00000000-0000-0000-0000-000000000002",
00:32:07.259 "is_configured": false,
00:32:07.259 "data_offset": 2048,
00:32:07.259 "data_size": 63488
00:32:07.259 },
00:32:07.259 {
00:32:07.259 "name": null,
00:32:07.259 "uuid": "00000000-0000-0000-0000-000000000003",
00:32:07.259 "is_configured": false,
00:32:07.259 "data_offset": 2048,
00:32:07.259 "data_size": 63488
00:32:07.259 },
00:32:07.259 {
00:32:07.259 "name": null,
00:32:07.259 "uuid": "00000000-0000-0000-0000-000000000004",
00:32:07.259 "is_configured": false,
00:32:07.259 "data_offset": 2048,
00:32:07.259 "data_size": 63488
00:32:07.259 }
00:32:07.259 ]
00:32:07.259 }'
00:32:07.259 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:07.259 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:07.519 [2024-11-26 17:28:44.844983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:32:07.519 [2024-11-26 17:28:44.845074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:32:07.519 [2024-11-26 17:28:44.845098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:32:07.519 [2024-11-26 17:28:44.845112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:32:07.519 [2024-11-26 17:28:44.845565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:32:07.519 [2024-11-26 17:28:44.845587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:32:07.519 [2024-11-26 17:28:44.845669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:32:07.519 [2024-11-26 17:28:44.845696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:32:07.519 pt2
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:07.519 [2024-11-26 17:28:44.852969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:07.519 "name": "raid_bdev1",
00:32:07.519 "uuid": "94975f64-bdf9-41cf-8461-dfe0f2d44e85",
00:32:07.519 "strip_size_kb": 64,
00:32:07.519 "state": "configuring",
00:32:07.519 "raid_level": "raid0",
00:32:07.519 "superblock": true,
00:32:07.519 "num_base_bdevs": 4,
00:32:07.519 "num_base_bdevs_discovered": 1,
00:32:07.519 "num_base_bdevs_operational": 4,
00:32:07.519 "base_bdevs_list": [
00:32:07.519 {
00:32:07.519 "name": "pt1",
00:32:07.519 "uuid": "00000000-0000-0000-0000-000000000001",
00:32:07.519 "is_configured": true,
00:32:07.519 "data_offset": 2048,
00:32:07.519 "data_size": 63488
00:32:07.519 },
00:32:07.519 {
00:32:07.519 "name": null,
00:32:07.519 "uuid": "00000000-0000-0000-0000-000000000002",
00:32:07.519 "is_configured": false,
00:32:07.519 "data_offset": 0,
00:32:07.519 "data_size": 63488
00:32:07.519 },
00:32:07.519 {
00:32:07.519 "name": null,
00:32:07.519 "uuid": "00000000-0000-0000-0000-000000000003",
00:32:07.519 "is_configured": false,
00:32:07.519 "data_offset": 2048,
00:32:07.519 "data_size": 63488
00:32:07.519 },
00:32:07.519 {
00:32:07.519 "name": null,
00:32:07.519 "uuid": "00000000-0000-0000-0000-000000000004",
00:32:07.519 "is_configured": false,
00:32:07.519 "data_offset": 2048,
00:32:07.519 "data_size": 63488
00:32:07.519 }
00:32:07.519 ]
00:32:07.519 }'
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:07.519 17:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.089 [2024-11-26 17:28:45.325089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:32:08.089 [2024-11-26 17:28:45.325287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:32:08.089 [2024-11-26 17:28:45.325320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:32:08.089 [2024-11-26 17:28:45.325332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:32:08.089 [2024-11-26 17:28:45.325805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:32:08.089 [2024-11-26 17:28:45.325833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:32:08.089 [2024-11-26 17:28:45.325924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:32:08.089 [2024-11-26 17:28:45.325947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:32:08.089 pt2
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.089 [2024-11-26 17:28:45.337041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:32:08.089 [2024-11-26 17:28:45.337101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:32:08.089 [2024-11-26 17:28:45.337122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:32:08.089 [2024-11-26 17:28:45.337133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:32:08.089 [2024-11-26 17:28:45.337510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:32:08.089 [2024-11-26 17:28:45.337527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:32:08.089 [2024-11-26 17:28:45.337588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:32:08.089 [2024-11-26 17:28:45.337613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:32:08.089 pt3
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.089 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.089 [2024-11-26 17:28:45.345014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:32:08.089 [2024-11-26 17:28:45.345172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:32:08.090 [2024-11-26 17:28:45.345201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:32:08.090 [2024-11-26 17:28:45.345213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:32:08.090 [2024-11-26 17:28:45.345578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:32:08.090 [2024-11-26 17:28:45.345603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:32:08.090 [2024-11-26 17:28:45.345664] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:32:08.090 [2024-11-26 17:28:45.345687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:32:08.090 [2024-11-26 17:28:45.345808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:32:08.090 [2024-11-26 17:28:45.345818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:32:08.090 [2024-11-26 17:28:45.346074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:32:08.090 [2024-11-26 17:28:45.346227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:32:08.090 [2024-11-26 17:28:45.346241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:32:08.090 [2024-11-26 17:28:45.346362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:32:08.090 pt4
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:08.090 "name": "raid_bdev1",
00:32:08.090 "uuid": "94975f64-bdf9-41cf-8461-dfe0f2d44e85",
00:32:08.090 "strip_size_kb": 64,
00:32:08.090 "state": "online",
00:32:08.090 "raid_level": "raid0",
00:32:08.090 "superblock": true,
00:32:08.090 "num_base_bdevs": 4,
00:32:08.090 "num_base_bdevs_discovered": 4,
00:32:08.090 "num_base_bdevs_operational": 4,
00:32:08.090 "base_bdevs_list": [
00:32:08.090 {
00:32:08.090 "name": "pt1",
00:32:08.090 "uuid": "00000000-0000-0000-0000-000000000001",
00:32:08.090 "is_configured": true,
00:32:08.090 "data_offset": 2048,
00:32:08.090 "data_size": 63488
00:32:08.090 },
00:32:08.090 {
00:32:08.090 "name": "pt2",
00:32:08.090 "uuid": "00000000-0000-0000-0000-000000000002",
00:32:08.090 "is_configured": true,
00:32:08.090 "data_offset": 2048,
00:32:08.090 "data_size": 63488
00:32:08.090 },
00:32:08.090 {
00:32:08.090 "name": "pt3",
00:32:08.090 "uuid": "00000000-0000-0000-0000-000000000003",
00:32:08.090 "is_configured": true,
00:32:08.090 "data_offset": 2048,
00:32:08.090 "data_size": 63488
00:32:08.090 },
00:32:08.090 {
00:32:08.090 "name": "pt4",
00:32:08.090 "uuid": "00000000-0000-0000-0000-000000000004",
00:32:08.090 "is_configured": true,
00:32:08.090 "data_offset": 2048,
00:32:08.090 "data_size": 63488
00:32:08.090 }
00:32:08.090 ]
00:32:08.090 }'
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:08.090 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.366 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:32:08.366 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:32:08.366 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:32:08.366 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:32:08.366 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:32:08.366 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:32:08.366 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:32:08.624 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:32:08.624 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.624 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.624 [2024-11-26 17:28:45.817500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:08.624 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.624 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:32:08.624 "name": "raid_bdev1",
00:32:08.624 "aliases": [
00:32:08.624 "94975f64-bdf9-41cf-8461-dfe0f2d44e85"
00:32:08.624 ],
00:32:08.624 "product_name": "Raid Volume",
00:32:08.624 "block_size": 512,
00:32:08.624 "num_blocks": 253952,
00:32:08.624 "uuid": "94975f64-bdf9-41cf-8461-dfe0f2d44e85",
00:32:08.624 "assigned_rate_limits": {
00:32:08.624 "rw_ios_per_sec": 0,
00:32:08.624 "rw_mbytes_per_sec": 0,
00:32:08.624 "r_mbytes_per_sec": 0,
00:32:08.624 "w_mbytes_per_sec": 0
00:32:08.624 },
00:32:08.624 "claimed": false,
00:32:08.624 "zoned": false,
00:32:08.624 "supported_io_types": {
00:32:08.624 "read": true,
00:32:08.624 "write": true,
00:32:08.624 "unmap": true,
00:32:08.625 "flush": true,
00:32:08.625 "reset": true,
00:32:08.625 "nvme_admin": false,
00:32:08.625 "nvme_io": false,
00:32:08.625 "nvme_io_md": false,
00:32:08.625 "write_zeroes": true,
00:32:08.625 "zcopy": false,
00:32:08.625 "get_zone_info": false,
00:32:08.625 "zone_management": false,
00:32:08.625 "zone_append": false,
00:32:08.625 "compare": false,
00:32:08.625 "compare_and_write": false,
00:32:08.625 "abort": false,
00:32:08.625 "seek_hole": false,
00:32:08.625 "seek_data": false,
00:32:08.625 "copy": false,
00:32:08.625 "nvme_iov_md": false
00:32:08.625 },
00:32:08.625 "memory_domains": [
00:32:08.625 {
00:32:08.625 "dma_device_id": "system",
00:32:08.625 "dma_device_type": 1
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:08.625 "dma_device_type": 2
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "dma_device_id": "system",
00:32:08.625 "dma_device_type": 1
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:08.625 "dma_device_type": 2
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "dma_device_id": "system",
00:32:08.625 "dma_device_type": 1
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:08.625 "dma_device_type": 2
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "dma_device_id": "system",
00:32:08.625 "dma_device_type": 1
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:08.625 "dma_device_type": 2
00:32:08.625 }
00:32:08.625 ],
00:32:08.625 "driver_specific": {
00:32:08.625 "raid": {
00:32:08.625 "uuid": "94975f64-bdf9-41cf-8461-dfe0f2d44e85",
00:32:08.625 "strip_size_kb": 64,
00:32:08.625 "state": "online",
00:32:08.625 "raid_level": "raid0",
00:32:08.625 "superblock": true,
00:32:08.625 "num_base_bdevs": 4,
00:32:08.625 "num_base_bdevs_discovered": 4,
00:32:08.625 "num_base_bdevs_operational": 4,
00:32:08.625 "base_bdevs_list": [
00:32:08.625 {
00:32:08.625 "name": "pt1",
00:32:08.625 "uuid": "00000000-0000-0000-0000-000000000001",
00:32:08.625 "is_configured": true,
00:32:08.625 "data_offset": 2048,
00:32:08.625 "data_size": 63488
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "name": "pt2",
00:32:08.625 "uuid": "00000000-0000-0000-0000-000000000002",
00:32:08.625 "is_configured": true,
00:32:08.625 "data_offset": 2048,
00:32:08.625 "data_size": 63488
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "name": "pt3",
00:32:08.625 "uuid": "00000000-0000-0000-0000-000000000003",
00:32:08.625 "is_configured": true,
00:32:08.625 "data_offset": 2048,
00:32:08.625 "data_size": 63488
00:32:08.625 },
00:32:08.625 {
00:32:08.625 "name": "pt4",
00:32:08.625 "uuid": "00000000-0000-0000-0000-000000000004",
00:32:08.625 "is_configured": true,
00:32:08.625 "data_offset": 2048,
00:32:08.625 "data_size": 63488
00:32:08.625 }
00:32:08.625 ]
00:32:08.625 }
00:32:08.625 }
00:32:08.625 }'
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:32:08.625 pt2
00:32:08.625 pt3
00:32:08.625 pt4'
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.625 17:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.625 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.625 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:32:08.625 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:32:08.625 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:32:08.625 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:08.625 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:32:08.625 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.625 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:32:08.884 [2024-11-26 17:28:46.145543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 94975f64-bdf9-41cf-8461-dfe0f2d44e85 '!=' 94975f64-bdf9-41cf-8461-dfe0f2d44e85 ']'
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71148
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71148 ']'
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71148
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71148
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71148'
00:32:08.884 killing process with pid 71148
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71148
00:32:08.884 [2024-11-26 17:28:46.226985] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:32:08.884 [2024-11-26 17:28:46.227086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:32:08.884 [2024-11-26 17:28:46.227164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:32:08.884 [2024-11-26 17:28:46.227175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:32:08.884 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71148
00:32:09.450 [2024-11-26 17:28:46.632173] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:32:10.384 ************************************
00:32:10.384 END TEST raid_superblock_test
00:32:10.384 ************************************
00:32:10.384 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:32:10.384 
00:32:10.384 real	0m5.718s
00:32:10.384 user	0m8.313s
00:32:10.384 sys	0m1.010s
00:32:10.384 17:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:10.384 17:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:32:10.643 17:28:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:32:10.643 17:28:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:32:10.643 17:28:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:10.643 17:28:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:32:10.643 ************************************
00:32:10.643 START TEST raid_read_error_test
00:32:10.643 ************************************
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vyTz3X7stF
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71415
00:32:10.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71415
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71415 ']'
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:10.643 17:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:32:10.643 [2024-11-26 17:28:47.984292] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:32:10.643 [2024-11-26 17:28:47.984471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71415 ] 00:32:10.901 [2024-11-26 17:28:48.178925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.901 [2024-11-26 17:28:48.298988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.158 [2024-11-26 17:28:48.512419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:11.159 [2024-11-26 17:28:48.512488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.725 BaseBdev1_malloc 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.725 true 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.725 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.725 [2024-11-26 17:28:48.963898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:11.725 [2024-11-26 17:28:48.963959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.726 [2024-11-26 17:28:48.963983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:11.726 [2024-11-26 17:28:48.963997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.726 [2024-11-26 17:28:48.966519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.726 [2024-11-26 17:28:48.966717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:11.726 BaseBdev1 00:32:11.726 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.726 17:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:11.726 17:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:11.726 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.726 17:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.726 BaseBdev2_malloc 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.726 true 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.726 [2024-11-26 17:28:49.034133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:11.726 [2024-11-26 17:28:49.034190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.726 [2024-11-26 17:28:49.034210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:11.726 [2024-11-26 17:28:49.034224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.726 [2024-11-26 17:28:49.036577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.726 [2024-11-26 17:28:49.036751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:11.726 BaseBdev2 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.726 BaseBdev3_malloc 00:32:11.726 17:28:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.726 true 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.726 [2024-11-26 17:28:49.116455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:32:11.726 [2024-11-26 17:28:49.116511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.726 [2024-11-26 17:28:49.116532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:11.726 [2024-11-26 17:28:49.116546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.726 [2024-11-26 17:28:49.118927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.726 [2024-11-26 17:28:49.118970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:11.726 BaseBdev3 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.726 BaseBdev4_malloc 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.726 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.986 true 00:32:11.986 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.986 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:32:11.986 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.986 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.986 [2024-11-26 17:28:49.180001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:32:11.986 [2024-11-26 17:28:49.180090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.986 [2024-11-26 17:28:49.180114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:11.986 [2024-11-26 17:28:49.180128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.986 [2024-11-26 17:28:49.182703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.986 [2024-11-26 17:28:49.182869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:11.986 BaseBdev4 00:32:11.986 17:28:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.986 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:32:11.986 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.986 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.987 [2024-11-26 17:28:49.192121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:11.987 [2024-11-26 17:28:49.194543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:11.987 [2024-11-26 17:28:49.194630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:11.987 [2024-11-26 17:28:49.194703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:11.987 [2024-11-26 17:28:49.194944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:32:11.987 [2024-11-26 17:28:49.194968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:11.987 [2024-11-26 17:28:49.195283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:32:11.987 [2024-11-26 17:28:49.195476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:32:11.987 [2024-11-26 17:28:49.195491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:32:11.987 [2024-11-26 17:28:49.195681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:32:11.987 17:28:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:11.987 "name": "raid_bdev1", 00:32:11.987 "uuid": "32589d46-797d-4fd5-bfcf-b9e00be342be", 00:32:11.987 "strip_size_kb": 64, 00:32:11.987 "state": "online", 00:32:11.987 "raid_level": "raid0", 00:32:11.987 "superblock": true, 00:32:11.987 "num_base_bdevs": 4, 00:32:11.987 "num_base_bdevs_discovered": 4, 00:32:11.987 "num_base_bdevs_operational": 4, 00:32:11.987 "base_bdevs_list": [ 00:32:11.987 
{ 00:32:11.987 "name": "BaseBdev1", 00:32:11.987 "uuid": "c384c15b-d328-5e07-b4df-b7ad8e158906", 00:32:11.987 "is_configured": true, 00:32:11.987 "data_offset": 2048, 00:32:11.987 "data_size": 63488 00:32:11.987 }, 00:32:11.987 { 00:32:11.987 "name": "BaseBdev2", 00:32:11.987 "uuid": "81389d2a-c7e1-563d-85e2-e1e26009006e", 00:32:11.987 "is_configured": true, 00:32:11.987 "data_offset": 2048, 00:32:11.987 "data_size": 63488 00:32:11.987 }, 00:32:11.987 { 00:32:11.987 "name": "BaseBdev3", 00:32:11.987 "uuid": "b0540c99-1bee-5581-b514-59ecda6bd478", 00:32:11.987 "is_configured": true, 00:32:11.987 "data_offset": 2048, 00:32:11.987 "data_size": 63488 00:32:11.987 }, 00:32:11.987 { 00:32:11.987 "name": "BaseBdev4", 00:32:11.987 "uuid": "7dafde44-2b82-5dbb-9b4c-b1dd64d060de", 00:32:11.987 "is_configured": true, 00:32:11.987 "data_offset": 2048, 00:32:11.987 "data_size": 63488 00:32:11.987 } 00:32:11.987 ] 00:32:11.987 }' 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:11.987 17:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:12.247 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:12.247 17:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:12.506 [2024-11-26 17:28:49.833584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:32:13.440 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:32:13.440 17:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.441 17:28:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.441 17:28:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:13.441 "name": "raid_bdev1", 00:32:13.441 "uuid": "32589d46-797d-4fd5-bfcf-b9e00be342be", 00:32:13.441 "strip_size_kb": 64, 00:32:13.441 "state": "online", 00:32:13.441 "raid_level": "raid0", 00:32:13.441 "superblock": true, 00:32:13.441 "num_base_bdevs": 4, 00:32:13.441 "num_base_bdevs_discovered": 4, 00:32:13.441 "num_base_bdevs_operational": 4, 00:32:13.441 "base_bdevs_list": [ 00:32:13.441 { 00:32:13.441 "name": "BaseBdev1", 00:32:13.441 "uuid": "c384c15b-d328-5e07-b4df-b7ad8e158906", 00:32:13.441 "is_configured": true, 00:32:13.441 "data_offset": 2048, 00:32:13.441 "data_size": 63488 00:32:13.441 }, 00:32:13.441 { 00:32:13.441 "name": "BaseBdev2", 00:32:13.441 "uuid": "81389d2a-c7e1-563d-85e2-e1e26009006e", 00:32:13.441 "is_configured": true, 00:32:13.441 "data_offset": 2048, 00:32:13.441 "data_size": 63488 00:32:13.441 }, 00:32:13.441 { 00:32:13.441 "name": "BaseBdev3", 00:32:13.441 "uuid": "b0540c99-1bee-5581-b514-59ecda6bd478", 00:32:13.441 "is_configured": true, 00:32:13.441 "data_offset": 2048, 00:32:13.441 "data_size": 63488 00:32:13.441 }, 00:32:13.441 { 00:32:13.441 "name": "BaseBdev4", 00:32:13.441 "uuid": "7dafde44-2b82-5dbb-9b4c-b1dd64d060de", 00:32:13.441 "is_configured": true, 00:32:13.441 "data_offset": 2048, 00:32:13.441 "data_size": 63488 00:32:13.441 } 00:32:13.441 ] 00:32:13.441 }' 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:13.441 17:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:13.700 [2024-11-26 17:28:51.102248] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:13.700 [2024-11-26 17:28:51.102436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:13.700 [2024-11-26 17:28:51.105176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:13.700 [2024-11-26 17:28:51.105236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:13.700 [2024-11-26 17:28:51.105279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:13.700 [2024-11-26 17:28:51.105294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:32:13.700 { 00:32:13.700 "results": [ 00:32:13.700 { 00:32:13.700 "job": "raid_bdev1", 00:32:13.700 "core_mask": "0x1", 00:32:13.700 "workload": "randrw", 00:32:13.700 "percentage": 50, 00:32:13.700 "status": "finished", 00:32:13.700 "queue_depth": 1, 00:32:13.700 "io_size": 131072, 00:32:13.700 "runtime": 1.266496, 00:32:13.700 "iops": 15159.937338925665, 00:32:13.700 "mibps": 1894.9921673657082, 00:32:13.700 "io_failed": 1, 00:32:13.700 "io_timeout": 0, 00:32:13.700 "avg_latency_us": 91.11632102494663, 00:32:13.700 "min_latency_us": 27.794285714285714, 00:32:13.700 "max_latency_us": 1427.7485714285715 00:32:13.700 } 00:32:13.700 ], 00:32:13.700 "core_count": 1 00:32:13.700 } 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71415 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71415 ']' 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71415 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:13.700 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71415 00:32:13.959 killing process with pid 71415 00:32:13.959 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:13.959 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:13.959 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71415' 00:32:13.959 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71415 00:32:13.959 [2024-11-26 17:28:51.153555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:13.959 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71415 00:32:14.218 [2024-11-26 17:28:51.493547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vyTz3X7stF 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:32:15.613 00:32:15.613 real 0m4.881s 00:32:15.613 user 0m5.835s 00:32:15.613 sys 0m0.658s 00:32:15.613 ************************************ 00:32:15.613 END TEST raid_read_error_test 
00:32:15.613 ************************************ 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.613 17:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.613 17:28:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:32:15.613 17:28:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:15.613 17:28:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.613 17:28:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:15.613 ************************************ 00:32:15.613 START TEST raid_write_error_test 00:32:15.613 ************************************ 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:15.613 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.D7r5EFdnKs 00:32:15.614 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71562 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71562 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71562 ']' 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.614 17:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.614 [2024-11-26 17:28:52.927499] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:32:15.614 [2024-11-26 17:28:52.928377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71562 ] 00:32:15.873 [2024-11-26 17:28:53.129796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.873 [2024-11-26 17:28:53.308484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.132 [2024-11-26 17:28:53.530324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:16.132 [2024-11-26 17:28:53.530392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.392 BaseBdev1_malloc 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.392 true 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.392 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.392 [2024-11-26 17:28:53.834961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:16.392 [2024-11-26 17:28:53.835026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.392 [2024-11-26 17:28:53.835060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:16.392 [2024-11-26 17:28:53.835076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.392 [2024-11-26 17:28:53.837437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.392 [2024-11-26 17:28:53.837482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:16.651 BaseBdev1 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.651 BaseBdev2_malloc 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:16.651 17:28:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.651 true 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.651 [2024-11-26 17:28:53.899546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:16.651 [2024-11-26 17:28:53.899612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.651 [2024-11-26 17:28:53.899635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:16.651 [2024-11-26 17:28:53.899652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.651 [2024-11-26 17:28:53.902945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.651 [2024-11-26 17:28:53.903241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:16.651 BaseBdev2 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:32:16.651 BaseBdev3_malloc 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.651 true 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.651 [2024-11-26 17:28:53.980799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:32:16.651 [2024-11-26 17:28:53.980980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.651 [2024-11-26 17:28:53.981009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:16.651 [2024-11-26 17:28:53.981023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.651 [2024-11-26 17:28:53.983448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.651 [2024-11-26 17:28:53.983493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:16.651 BaseBdev3 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.651 BaseBdev4_malloc 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.651 true 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.651 [2024-11-26 17:28:54.050498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:32:16.651 [2024-11-26 17:28:54.050556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.651 [2024-11-26 17:28:54.050577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:16.651 [2024-11-26 17:28:54.050591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.651 [2024-11-26 17:28:54.053027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.651 [2024-11-26 17:28:54.053091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:16.651 BaseBdev4 
00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.651 [2024-11-26 17:28:54.058566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:16.651 [2024-11-26 17:28:54.060756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:16.651 [2024-11-26 17:28:54.060831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:16.651 [2024-11-26 17:28:54.060895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:16.651 [2024-11-26 17:28:54.061145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:32:16.651 [2024-11-26 17:28:54.061164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:16.651 [2024-11-26 17:28:54.061423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:32:16.651 [2024-11-26 17:28:54.061589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:32:16.651 [2024-11-26 17:28:54.061616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:32:16.651 [2024-11-26 17:28:54.061777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.651 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.652 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.652 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.652 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.910 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:16.910 "name": "raid_bdev1", 00:32:16.910 "uuid": "9375bc3e-79d7-4c47-88c6-8355d6e163f6", 00:32:16.910 "strip_size_kb": 64, 00:32:16.910 "state": "online", 00:32:16.910 "raid_level": "raid0", 00:32:16.910 "superblock": true, 00:32:16.911 "num_base_bdevs": 4, 00:32:16.911 "num_base_bdevs_discovered": 4, 00:32:16.911 
"num_base_bdevs_operational": 4, 00:32:16.911 "base_bdevs_list": [ 00:32:16.911 { 00:32:16.911 "name": "BaseBdev1", 00:32:16.911 "uuid": "56a372e8-2d0e-5c8f-823e-e440db3f84d9", 00:32:16.911 "is_configured": true, 00:32:16.911 "data_offset": 2048, 00:32:16.911 "data_size": 63488 00:32:16.911 }, 00:32:16.911 { 00:32:16.911 "name": "BaseBdev2", 00:32:16.911 "uuid": "68941767-8d54-51a0-a31e-ae09acb5ff54", 00:32:16.911 "is_configured": true, 00:32:16.911 "data_offset": 2048, 00:32:16.911 "data_size": 63488 00:32:16.911 }, 00:32:16.911 { 00:32:16.911 "name": "BaseBdev3", 00:32:16.911 "uuid": "26f2de16-d39b-5b9d-8eaf-897178f1a3cc", 00:32:16.911 "is_configured": true, 00:32:16.911 "data_offset": 2048, 00:32:16.911 "data_size": 63488 00:32:16.911 }, 00:32:16.911 { 00:32:16.911 "name": "BaseBdev4", 00:32:16.911 "uuid": "ae73a2df-cc0d-5797-89a6-f3b480947d0b", 00:32:16.911 "is_configured": true, 00:32:16.911 "data_offset": 2048, 00:32:16.911 "data_size": 63488 00:32:16.911 } 00:32:16.911 ] 00:32:16.911 }' 00:32:16.911 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:16.911 17:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.170 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:17.170 17:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:17.170 [2024-11-26 17:28:54.588157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.108 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:18.108 "name": "raid_bdev1", 00:32:18.108 "uuid": "9375bc3e-79d7-4c47-88c6-8355d6e163f6", 00:32:18.108 "strip_size_kb": 64, 00:32:18.108 "state": "online", 00:32:18.108 "raid_level": "raid0", 00:32:18.108 "superblock": true, 00:32:18.108 "num_base_bdevs": 4, 00:32:18.108 "num_base_bdevs_discovered": 4, 00:32:18.108 "num_base_bdevs_operational": 4, 00:32:18.108 "base_bdevs_list": [ 00:32:18.108 { 00:32:18.108 "name": "BaseBdev1", 00:32:18.108 "uuid": "56a372e8-2d0e-5c8f-823e-e440db3f84d9", 00:32:18.108 "is_configured": true, 00:32:18.108 "data_offset": 2048, 00:32:18.108 "data_size": 63488 00:32:18.108 }, 00:32:18.108 { 00:32:18.108 "name": "BaseBdev2", 00:32:18.108 "uuid": "68941767-8d54-51a0-a31e-ae09acb5ff54", 00:32:18.108 "is_configured": true, 00:32:18.108 "data_offset": 2048, 00:32:18.108 "data_size": 63488 00:32:18.108 }, 00:32:18.108 { 00:32:18.108 "name": "BaseBdev3", 00:32:18.108 "uuid": "26f2de16-d39b-5b9d-8eaf-897178f1a3cc", 00:32:18.108 "is_configured": true, 00:32:18.108 "data_offset": 2048, 00:32:18.108 "data_size": 63488 00:32:18.108 }, 00:32:18.108 { 00:32:18.108 "name": "BaseBdev4", 00:32:18.108 "uuid": "ae73a2df-cc0d-5797-89a6-f3b480947d0b", 00:32:18.108 "is_configured": true, 00:32:18.108 "data_offset": 2048, 00:32:18.108 "data_size": 63488 00:32:18.109 } 00:32:18.109 ] 00:32:18.109 }' 00:32:18.109 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:18.109 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.677 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:18.677 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.677 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:32:18.677 [2024-11-26 17:28:55.884939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:18.677 [2024-11-26 17:28:55.885194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:18.677 [2024-11-26 17:28:55.888357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:18.678 [2024-11-26 17:28:55.888538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:18.678 [2024-11-26 17:28:55.888594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:18.678 [2024-11-26 17:28:55.888609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:32:18.678 { 00:32:18.678 "results": [ 00:32:18.678 { 00:32:18.678 "job": "raid_bdev1", 00:32:18.678 "core_mask": "0x1", 00:32:18.678 "workload": "randrw", 00:32:18.678 "percentage": 50, 00:32:18.678 "status": "finished", 00:32:18.678 "queue_depth": 1, 00:32:18.678 "io_size": 131072, 00:32:18.678 "runtime": 1.294913, 00:32:18.678 "iops": 14977.840210114502, 00:32:18.678 "mibps": 1872.2300262643128, 00:32:18.678 "io_failed": 1, 00:32:18.678 "io_timeout": 0, 00:32:18.678 "avg_latency_us": 92.23683469345669, 00:32:18.678 "min_latency_us": 27.916190476190476, 00:32:18.678 "max_latency_us": 1435.5504761904763 00:32:18.678 } 00:32:18.678 ], 00:32:18.678 "core_count": 1 00:32:18.678 } 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71562 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71562 ']' 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71562 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71562 00:32:18.678 killing process with pid 71562 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71562' 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71562 00:32:18.678 [2024-11-26 17:28:55.934269] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:18.678 17:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71562 00:32:18.936 [2024-11-26 17:28:56.277585] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:20.315 17:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.D7r5EFdnKs 00:32:20.315 17:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:20.315 17:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:20.315 17:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:32:20.315 17:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:32:20.315 17:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:20.315 17:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:20.315 17:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:32:20.315 00:32:20.315 real 0m4.733s 00:32:20.315 user 0m5.508s 00:32:20.315 sys 0m0.645s 00:32:20.315 
17:28:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:20.315 17:28:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.315 ************************************ 00:32:20.315 END TEST raid_write_error_test 00:32:20.315 ************************************ 00:32:20.315 17:28:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:32:20.315 17:28:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:32:20.315 17:28:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:20.315 17:28:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:20.315 17:28:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:20.315 ************************************ 00:32:20.315 START TEST raid_state_function_test 00:32:20.315 ************************************ 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:20.315 17:28:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:20.315 17:28:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:32:20.315 Process raid pid: 71713 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71713 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71713' 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71713 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71713 ']' 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.315 17:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.315 [2024-11-26 17:28:57.703219] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:32:20.315 [2024-11-26 17:28:57.703645] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.575 [2024-11-26 17:28:57.901041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.834 [2024-11-26 17:28:58.021141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.834 [2024-11-26 17:28:58.235623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:20.834 [2024-11-26 17:28:58.235873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:21.408 17:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.408 17:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:32:21.408 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:21.408 17:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.408 17:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.408 [2024-11-26 17:28:58.641092] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:21.408 [2024-11-26 17:28:58.641151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:21.408 [2024-11-26 17:28:58.641164] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:21.409 [2024-11-26 17:28:58.641177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:21.409 [2024-11-26 17:28:58.641185] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:32:21.409 [2024-11-26 17:28:58.641197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:21.409 [2024-11-26 17:28:58.641205] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:21.409 [2024-11-26 17:28:58.641217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:32:21.409 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:21.410 17:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.410 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:21.410 "name": "Existed_Raid", 00:32:21.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.410 "strip_size_kb": 64, 00:32:21.410 "state": "configuring", 00:32:21.410 "raid_level": "concat", 00:32:21.410 "superblock": false, 00:32:21.410 "num_base_bdevs": 4, 00:32:21.410 "num_base_bdevs_discovered": 0, 00:32:21.410 "num_base_bdevs_operational": 4, 00:32:21.410 "base_bdevs_list": [ 00:32:21.410 { 00:32:21.410 "name": "BaseBdev1", 00:32:21.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.410 "is_configured": false, 00:32:21.410 "data_offset": 0, 00:32:21.410 "data_size": 0 00:32:21.410 }, 00:32:21.410 { 00:32:21.410 "name": "BaseBdev2", 00:32:21.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.410 "is_configured": false, 00:32:21.410 "data_offset": 0, 00:32:21.410 "data_size": 0 00:32:21.410 }, 00:32:21.410 { 00:32:21.410 "name": "BaseBdev3", 00:32:21.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.410 "is_configured": false, 00:32:21.410 "data_offset": 0, 00:32:21.410 "data_size": 0 00:32:21.410 }, 00:32:21.410 { 00:32:21.410 "name": "BaseBdev4", 00:32:21.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.410 "is_configured": false, 00:32:21.410 "data_offset": 0, 00:32:21.410 "data_size": 0 00:32:21.410 } 00:32:21.410 ] 00:32:21.410 }' 00:32:21.410 17:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:21.410 17:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.674 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:32:21.674 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.674 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.674 [2024-11-26 17:28:59.117147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:21.674 [2024-11-26 17:28:59.117312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.934 [2024-11-26 17:28:59.129170] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:21.934 [2024-11-26 17:28:59.129216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:21.934 [2024-11-26 17:28:59.129227] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:21.934 [2024-11-26 17:28:59.129240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:21.934 [2024-11-26 17:28:59.129248] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:21.934 [2024-11-26 17:28:59.129261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:21.934 [2024-11-26 17:28:59.129269] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:21.934 [2024-11-26 17:28:59.129281] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.934 [2024-11-26 17:28:59.178358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:21.934 BaseBdev1 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.934 [ 00:32:21.934 { 00:32:21.934 "name": "BaseBdev1", 00:32:21.934 "aliases": [ 00:32:21.934 "42b513a5-3eab-4e53-9b4c-31908fa8a087" 00:32:21.934 ], 00:32:21.934 "product_name": "Malloc disk", 00:32:21.934 "block_size": 512, 00:32:21.934 "num_blocks": 65536, 00:32:21.934 "uuid": "42b513a5-3eab-4e53-9b4c-31908fa8a087", 00:32:21.934 "assigned_rate_limits": { 00:32:21.934 "rw_ios_per_sec": 0, 00:32:21.934 "rw_mbytes_per_sec": 0, 00:32:21.934 "r_mbytes_per_sec": 0, 00:32:21.934 "w_mbytes_per_sec": 0 00:32:21.934 }, 00:32:21.934 "claimed": true, 00:32:21.934 "claim_type": "exclusive_write", 00:32:21.934 "zoned": false, 00:32:21.934 "supported_io_types": { 00:32:21.934 "read": true, 00:32:21.934 "write": true, 00:32:21.934 "unmap": true, 00:32:21.934 "flush": true, 00:32:21.934 "reset": true, 00:32:21.934 "nvme_admin": false, 00:32:21.934 "nvme_io": false, 00:32:21.934 "nvme_io_md": false, 00:32:21.934 "write_zeroes": true, 00:32:21.934 "zcopy": true, 00:32:21.934 "get_zone_info": false, 00:32:21.934 "zone_management": false, 00:32:21.934 "zone_append": false, 00:32:21.934 "compare": false, 00:32:21.934 "compare_and_write": false, 00:32:21.934 "abort": true, 00:32:21.934 "seek_hole": false, 00:32:21.934 "seek_data": false, 00:32:21.934 "copy": true, 00:32:21.934 "nvme_iov_md": false 00:32:21.934 }, 00:32:21.934 "memory_domains": [ 00:32:21.934 { 00:32:21.934 "dma_device_id": "system", 00:32:21.934 "dma_device_type": 1 00:32:21.934 }, 00:32:21.934 { 00:32:21.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:21.934 "dma_device_type": 2 00:32:21.934 } 00:32:21.934 ], 00:32:21.934 "driver_specific": {} 00:32:21.934 } 00:32:21.934 ] 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.934 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:21.934 "name": "Existed_Raid", 
00:32:21.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.934 "strip_size_kb": 64, 00:32:21.934 "state": "configuring", 00:32:21.934 "raid_level": "concat", 00:32:21.934 "superblock": false, 00:32:21.934 "num_base_bdevs": 4, 00:32:21.934 "num_base_bdevs_discovered": 1, 00:32:21.934 "num_base_bdevs_operational": 4, 00:32:21.934 "base_bdevs_list": [ 00:32:21.934 { 00:32:21.934 "name": "BaseBdev1", 00:32:21.934 "uuid": "42b513a5-3eab-4e53-9b4c-31908fa8a087", 00:32:21.934 "is_configured": true, 00:32:21.934 "data_offset": 0, 00:32:21.934 "data_size": 65536 00:32:21.934 }, 00:32:21.934 { 00:32:21.934 "name": "BaseBdev2", 00:32:21.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.935 "is_configured": false, 00:32:21.935 "data_offset": 0, 00:32:21.935 "data_size": 0 00:32:21.935 }, 00:32:21.935 { 00:32:21.935 "name": "BaseBdev3", 00:32:21.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.935 "is_configured": false, 00:32:21.935 "data_offset": 0, 00:32:21.935 "data_size": 0 00:32:21.935 }, 00:32:21.935 { 00:32:21.935 "name": "BaseBdev4", 00:32:21.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.935 "is_configured": false, 00:32:21.935 "data_offset": 0, 00:32:21.935 "data_size": 0 00:32:21.935 } 00:32:21.935 ] 00:32:21.935 }' 00:32:21.935 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:21.935 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.550 [2024-11-26 17:28:59.686523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:22.550 [2024-11-26 17:28:59.686578] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.550 [2024-11-26 17:28:59.694573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:22.550 [2024-11-26 17:28:59.696880] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:22.550 [2024-11-26 17:28:59.696929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:22.550 [2024-11-26 17:28:59.696942] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:22.550 [2024-11-26 17:28:59.696957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:22.550 [2024-11-26 17:28:59.696966] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:22.550 [2024-11-26 17:28:59.696978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:22.550 "name": "Existed_Raid", 00:32:22.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.550 "strip_size_kb": 64, 00:32:22.550 "state": "configuring", 00:32:22.550 "raid_level": "concat", 00:32:22.550 "superblock": false, 00:32:22.550 "num_base_bdevs": 4, 00:32:22.550 
"num_base_bdevs_discovered": 1, 00:32:22.550 "num_base_bdevs_operational": 4, 00:32:22.550 "base_bdevs_list": [ 00:32:22.550 { 00:32:22.550 "name": "BaseBdev1", 00:32:22.550 "uuid": "42b513a5-3eab-4e53-9b4c-31908fa8a087", 00:32:22.550 "is_configured": true, 00:32:22.550 "data_offset": 0, 00:32:22.550 "data_size": 65536 00:32:22.550 }, 00:32:22.550 { 00:32:22.550 "name": "BaseBdev2", 00:32:22.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.550 "is_configured": false, 00:32:22.550 "data_offset": 0, 00:32:22.550 "data_size": 0 00:32:22.550 }, 00:32:22.550 { 00:32:22.550 "name": "BaseBdev3", 00:32:22.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.550 "is_configured": false, 00:32:22.550 "data_offset": 0, 00:32:22.550 "data_size": 0 00:32:22.550 }, 00:32:22.550 { 00:32:22.550 "name": "BaseBdev4", 00:32:22.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.550 "is_configured": false, 00:32:22.550 "data_offset": 0, 00:32:22.550 "data_size": 0 00:32:22.550 } 00:32:22.550 ] 00:32:22.550 }' 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:22.550 17:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.808 [2024-11-26 17:29:00.195418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:22.808 BaseBdev2 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:22.808 17:29:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:22.808 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.809 [ 00:32:22.809 { 00:32:22.809 "name": "BaseBdev2", 00:32:22.809 "aliases": [ 00:32:22.809 "168b8ddd-6bcd-42d2-bbf3-08d96565dbd3" 00:32:22.809 ], 00:32:22.809 "product_name": "Malloc disk", 00:32:22.809 "block_size": 512, 00:32:22.809 "num_blocks": 65536, 00:32:22.809 "uuid": "168b8ddd-6bcd-42d2-bbf3-08d96565dbd3", 00:32:22.809 "assigned_rate_limits": { 00:32:22.809 "rw_ios_per_sec": 0, 00:32:22.809 "rw_mbytes_per_sec": 0, 00:32:22.809 "r_mbytes_per_sec": 0, 00:32:22.809 "w_mbytes_per_sec": 0 00:32:22.809 }, 00:32:22.809 "claimed": true, 00:32:22.809 "claim_type": "exclusive_write", 00:32:22.809 "zoned": false, 00:32:22.809 "supported_io_types": { 
00:32:22.809 "read": true, 00:32:22.809 "write": true, 00:32:22.809 "unmap": true, 00:32:22.809 "flush": true, 00:32:22.809 "reset": true, 00:32:22.809 "nvme_admin": false, 00:32:22.809 "nvme_io": false, 00:32:22.809 "nvme_io_md": false, 00:32:22.809 "write_zeroes": true, 00:32:22.809 "zcopy": true, 00:32:22.809 "get_zone_info": false, 00:32:22.809 "zone_management": false, 00:32:22.809 "zone_append": false, 00:32:22.809 "compare": false, 00:32:22.809 "compare_and_write": false, 00:32:22.809 "abort": true, 00:32:22.809 "seek_hole": false, 00:32:22.809 "seek_data": false, 00:32:22.809 "copy": true, 00:32:22.809 "nvme_iov_md": false 00:32:22.809 }, 00:32:22.809 "memory_domains": [ 00:32:22.809 { 00:32:22.809 "dma_device_id": "system", 00:32:22.809 "dma_device_type": 1 00:32:22.809 }, 00:32:22.809 { 00:32:22.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.809 "dma_device_type": 2 00:32:22.809 } 00:32:22.809 ], 00:32:22.809 "driver_specific": {} 00:32:22.809 } 00:32:22.809 ] 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.809 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.067 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:23.067 "name": "Existed_Raid", 00:32:23.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.067 "strip_size_kb": 64, 00:32:23.067 "state": "configuring", 00:32:23.067 "raid_level": "concat", 00:32:23.067 "superblock": false, 00:32:23.067 "num_base_bdevs": 4, 00:32:23.067 "num_base_bdevs_discovered": 2, 00:32:23.067 "num_base_bdevs_operational": 4, 00:32:23.067 "base_bdevs_list": [ 00:32:23.067 { 00:32:23.067 "name": "BaseBdev1", 00:32:23.067 "uuid": "42b513a5-3eab-4e53-9b4c-31908fa8a087", 00:32:23.067 "is_configured": true, 00:32:23.067 "data_offset": 0, 00:32:23.067 "data_size": 65536 00:32:23.067 }, 00:32:23.067 { 00:32:23.067 "name": "BaseBdev2", 00:32:23.067 "uuid": "168b8ddd-6bcd-42d2-bbf3-08d96565dbd3", 00:32:23.067 
"is_configured": true, 00:32:23.067 "data_offset": 0, 00:32:23.067 "data_size": 65536 00:32:23.067 }, 00:32:23.067 { 00:32:23.067 "name": "BaseBdev3", 00:32:23.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.067 "is_configured": false, 00:32:23.067 "data_offset": 0, 00:32:23.067 "data_size": 0 00:32:23.067 }, 00:32:23.067 { 00:32:23.067 "name": "BaseBdev4", 00:32:23.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.067 "is_configured": false, 00:32:23.067 "data_offset": 0, 00:32:23.067 "data_size": 0 00:32:23.067 } 00:32:23.067 ] 00:32:23.067 }' 00:32:23.067 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:23.067 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.326 [2024-11-26 17:29:00.749793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:23.326 BaseBdev3 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.326 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.585 [ 00:32:23.585 { 00:32:23.585 "name": "BaseBdev3", 00:32:23.585 "aliases": [ 00:32:23.585 "8a435bb6-0ed2-47c6-bbf4-f24bd60f679d" 00:32:23.585 ], 00:32:23.585 "product_name": "Malloc disk", 00:32:23.585 "block_size": 512, 00:32:23.585 "num_blocks": 65536, 00:32:23.585 "uuid": "8a435bb6-0ed2-47c6-bbf4-f24bd60f679d", 00:32:23.585 "assigned_rate_limits": { 00:32:23.585 "rw_ios_per_sec": 0, 00:32:23.585 "rw_mbytes_per_sec": 0, 00:32:23.585 "r_mbytes_per_sec": 0, 00:32:23.585 "w_mbytes_per_sec": 0 00:32:23.585 }, 00:32:23.585 "claimed": true, 00:32:23.585 "claim_type": "exclusive_write", 00:32:23.585 "zoned": false, 00:32:23.585 "supported_io_types": { 00:32:23.585 "read": true, 00:32:23.585 "write": true, 00:32:23.585 "unmap": true, 00:32:23.585 "flush": true, 00:32:23.585 "reset": true, 00:32:23.585 "nvme_admin": false, 00:32:23.585 "nvme_io": false, 00:32:23.585 "nvme_io_md": false, 00:32:23.585 "write_zeroes": true, 00:32:23.585 "zcopy": true, 00:32:23.585 "get_zone_info": false, 00:32:23.585 "zone_management": false, 00:32:23.585 "zone_append": false, 00:32:23.585 "compare": false, 00:32:23.585 "compare_and_write": false, 
00:32:23.585 "abort": true, 00:32:23.585 "seek_hole": false, 00:32:23.585 "seek_data": false, 00:32:23.585 "copy": true, 00:32:23.585 "nvme_iov_md": false 00:32:23.585 }, 00:32:23.585 "memory_domains": [ 00:32:23.585 { 00:32:23.585 "dma_device_id": "system", 00:32:23.585 "dma_device_type": 1 00:32:23.585 }, 00:32:23.585 { 00:32:23.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.585 "dma_device_type": 2 00:32:23.585 } 00:32:23.585 ], 00:32:23.585 "driver_specific": {} 00:32:23.585 } 00:32:23.585 ] 00:32:23.585 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.585 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:23.585 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:23.585 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:23.585 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:23.586 "name": "Existed_Raid", 00:32:23.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.586 "strip_size_kb": 64, 00:32:23.586 "state": "configuring", 00:32:23.586 "raid_level": "concat", 00:32:23.586 "superblock": false, 00:32:23.586 "num_base_bdevs": 4, 00:32:23.586 "num_base_bdevs_discovered": 3, 00:32:23.586 "num_base_bdevs_operational": 4, 00:32:23.586 "base_bdevs_list": [ 00:32:23.586 { 00:32:23.586 "name": "BaseBdev1", 00:32:23.586 "uuid": "42b513a5-3eab-4e53-9b4c-31908fa8a087", 00:32:23.586 "is_configured": true, 00:32:23.586 "data_offset": 0, 00:32:23.586 "data_size": 65536 00:32:23.586 }, 00:32:23.586 { 00:32:23.586 "name": "BaseBdev2", 00:32:23.586 "uuid": "168b8ddd-6bcd-42d2-bbf3-08d96565dbd3", 00:32:23.586 "is_configured": true, 00:32:23.586 "data_offset": 0, 00:32:23.586 "data_size": 65536 00:32:23.586 }, 00:32:23.586 { 00:32:23.586 "name": "BaseBdev3", 00:32:23.586 "uuid": "8a435bb6-0ed2-47c6-bbf4-f24bd60f679d", 00:32:23.586 "is_configured": true, 00:32:23.586 "data_offset": 0, 00:32:23.586 "data_size": 65536 00:32:23.586 }, 00:32:23.586 { 00:32:23.586 "name": "BaseBdev4", 00:32:23.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.586 "is_configured": false, 
00:32:23.586 "data_offset": 0, 00:32:23.586 "data_size": 0 00:32:23.586 } 00:32:23.586 ] 00:32:23.586 }' 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:23.586 17:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.844 [2024-11-26 17:29:01.274153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:23.844 [2024-11-26 17:29:01.274379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:23.844 [2024-11-26 17:29:01.274400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:32:23.844 [2024-11-26 17:29:01.274721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:23.844 [2024-11-26 17:29:01.274886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:23.844 [2024-11-26 17:29:01.274900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:23.844 [2024-11-26 17:29:01.275224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:23.844 BaseBdev4 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.844 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.104 [ 00:32:24.104 { 00:32:24.104 "name": "BaseBdev4", 00:32:24.104 "aliases": [ 00:32:24.104 "6d6e3984-9471-41f1-9e1d-ddb43dd6fca1" 00:32:24.104 ], 00:32:24.104 "product_name": "Malloc disk", 00:32:24.104 "block_size": 512, 00:32:24.104 "num_blocks": 65536, 00:32:24.104 "uuid": "6d6e3984-9471-41f1-9e1d-ddb43dd6fca1", 00:32:24.104 "assigned_rate_limits": { 00:32:24.104 "rw_ios_per_sec": 0, 00:32:24.104 "rw_mbytes_per_sec": 0, 00:32:24.104 "r_mbytes_per_sec": 0, 00:32:24.104 "w_mbytes_per_sec": 0 00:32:24.104 }, 00:32:24.104 "claimed": true, 00:32:24.104 "claim_type": "exclusive_write", 00:32:24.104 "zoned": false, 00:32:24.104 "supported_io_types": { 00:32:24.104 "read": true, 00:32:24.104 "write": true, 00:32:24.104 "unmap": true, 00:32:24.104 "flush": true, 00:32:24.104 "reset": true, 00:32:24.104 
"nvme_admin": false, 00:32:24.104 "nvme_io": false, 00:32:24.104 "nvme_io_md": false, 00:32:24.104 "write_zeroes": true, 00:32:24.104 "zcopy": true, 00:32:24.104 "get_zone_info": false, 00:32:24.104 "zone_management": false, 00:32:24.104 "zone_append": false, 00:32:24.104 "compare": false, 00:32:24.104 "compare_and_write": false, 00:32:24.104 "abort": true, 00:32:24.104 "seek_hole": false, 00:32:24.104 "seek_data": false, 00:32:24.104 "copy": true, 00:32:24.104 "nvme_iov_md": false 00:32:24.104 }, 00:32:24.104 "memory_domains": [ 00:32:24.104 { 00:32:24.104 "dma_device_id": "system", 00:32:24.104 "dma_device_type": 1 00:32:24.104 }, 00:32:24.104 { 00:32:24.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.104 "dma_device_type": 2 00:32:24.104 } 00:32:24.104 ], 00:32:24.104 "driver_specific": {} 00:32:24.104 } 00:32:24.104 ] 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:24.104 
17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:24.104 "name": "Existed_Raid", 00:32:24.104 "uuid": "cee3f1d0-b64a-4fcc-98ff-a4a5665d7405", 00:32:24.104 "strip_size_kb": 64, 00:32:24.104 "state": "online", 00:32:24.104 "raid_level": "concat", 00:32:24.104 "superblock": false, 00:32:24.104 "num_base_bdevs": 4, 00:32:24.104 "num_base_bdevs_discovered": 4, 00:32:24.104 "num_base_bdevs_operational": 4, 00:32:24.104 "base_bdevs_list": [ 00:32:24.104 { 00:32:24.104 "name": "BaseBdev1", 00:32:24.104 "uuid": "42b513a5-3eab-4e53-9b4c-31908fa8a087", 00:32:24.104 "is_configured": true, 00:32:24.104 "data_offset": 0, 00:32:24.104 "data_size": 65536 00:32:24.104 }, 00:32:24.104 { 00:32:24.104 "name": "BaseBdev2", 00:32:24.104 "uuid": "168b8ddd-6bcd-42d2-bbf3-08d96565dbd3", 00:32:24.104 "is_configured": true, 00:32:24.104 "data_offset": 0, 00:32:24.104 "data_size": 65536 00:32:24.104 }, 00:32:24.104 { 00:32:24.104 "name": "BaseBdev3", 
00:32:24.104 "uuid": "8a435bb6-0ed2-47c6-bbf4-f24bd60f679d", 00:32:24.104 "is_configured": true, 00:32:24.104 "data_offset": 0, 00:32:24.104 "data_size": 65536 00:32:24.104 }, 00:32:24.104 { 00:32:24.104 "name": "BaseBdev4", 00:32:24.104 "uuid": "6d6e3984-9471-41f1-9e1d-ddb43dd6fca1", 00:32:24.104 "is_configured": true, 00:32:24.104 "data_offset": 0, 00:32:24.104 "data_size": 65536 00:32:24.104 } 00:32:24.104 ] 00:32:24.104 }' 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:24.104 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:24.364 [2024-11-26 17:29:01.758682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.364 
17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:24.364 "name": "Existed_Raid", 00:32:24.364 "aliases": [ 00:32:24.364 "cee3f1d0-b64a-4fcc-98ff-a4a5665d7405" 00:32:24.364 ], 00:32:24.364 "product_name": "Raid Volume", 00:32:24.364 "block_size": 512, 00:32:24.364 "num_blocks": 262144, 00:32:24.364 "uuid": "cee3f1d0-b64a-4fcc-98ff-a4a5665d7405", 00:32:24.364 "assigned_rate_limits": { 00:32:24.364 "rw_ios_per_sec": 0, 00:32:24.364 "rw_mbytes_per_sec": 0, 00:32:24.364 "r_mbytes_per_sec": 0, 00:32:24.364 "w_mbytes_per_sec": 0 00:32:24.364 }, 00:32:24.364 "claimed": false, 00:32:24.364 "zoned": false, 00:32:24.364 "supported_io_types": { 00:32:24.364 "read": true, 00:32:24.364 "write": true, 00:32:24.364 "unmap": true, 00:32:24.364 "flush": true, 00:32:24.364 "reset": true, 00:32:24.364 "nvme_admin": false, 00:32:24.364 "nvme_io": false, 00:32:24.364 "nvme_io_md": false, 00:32:24.364 "write_zeroes": true, 00:32:24.364 "zcopy": false, 00:32:24.364 "get_zone_info": false, 00:32:24.364 "zone_management": false, 00:32:24.364 "zone_append": false, 00:32:24.364 "compare": false, 00:32:24.364 "compare_and_write": false, 00:32:24.364 "abort": false, 00:32:24.364 "seek_hole": false, 00:32:24.364 "seek_data": false, 00:32:24.364 "copy": false, 00:32:24.364 "nvme_iov_md": false 00:32:24.364 }, 00:32:24.364 "memory_domains": [ 00:32:24.364 { 00:32:24.364 "dma_device_id": "system", 00:32:24.364 "dma_device_type": 1 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.364 "dma_device_type": 2 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "dma_device_id": "system", 00:32:24.364 "dma_device_type": 1 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.364 "dma_device_type": 2 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "dma_device_id": "system", 00:32:24.364 "dma_device_type": 1 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:32:24.364 "dma_device_type": 2 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "dma_device_id": "system", 00:32:24.364 "dma_device_type": 1 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.364 "dma_device_type": 2 00:32:24.364 } 00:32:24.364 ], 00:32:24.364 "driver_specific": { 00:32:24.364 "raid": { 00:32:24.364 "uuid": "cee3f1d0-b64a-4fcc-98ff-a4a5665d7405", 00:32:24.364 "strip_size_kb": 64, 00:32:24.364 "state": "online", 00:32:24.364 "raid_level": "concat", 00:32:24.364 "superblock": false, 00:32:24.364 "num_base_bdevs": 4, 00:32:24.364 "num_base_bdevs_discovered": 4, 00:32:24.364 "num_base_bdevs_operational": 4, 00:32:24.364 "base_bdevs_list": [ 00:32:24.364 { 00:32:24.364 "name": "BaseBdev1", 00:32:24.364 "uuid": "42b513a5-3eab-4e53-9b4c-31908fa8a087", 00:32:24.364 "is_configured": true, 00:32:24.364 "data_offset": 0, 00:32:24.364 "data_size": 65536 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "name": "BaseBdev2", 00:32:24.364 "uuid": "168b8ddd-6bcd-42d2-bbf3-08d96565dbd3", 00:32:24.364 "is_configured": true, 00:32:24.364 "data_offset": 0, 00:32:24.364 "data_size": 65536 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "name": "BaseBdev3", 00:32:24.364 "uuid": "8a435bb6-0ed2-47c6-bbf4-f24bd60f679d", 00:32:24.364 "is_configured": true, 00:32:24.364 "data_offset": 0, 00:32:24.364 "data_size": 65536 00:32:24.364 }, 00:32:24.364 { 00:32:24.364 "name": "BaseBdev4", 00:32:24.364 "uuid": "6d6e3984-9471-41f1-9e1d-ddb43dd6fca1", 00:32:24.364 "is_configured": true, 00:32:24.364 "data_offset": 0, 00:32:24.364 "data_size": 65536 00:32:24.364 } 00:32:24.364 ] 00:32:24.364 } 00:32:24.364 } 00:32:24.364 }' 00:32:24.364 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:24.623 BaseBdev2 
00:32:24.623 BaseBdev3 00:32:24.623 BaseBdev4' 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.623 17:29:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:24.623 17:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.623 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.623 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:24.623 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:24.623 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:24.623 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:24.623 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:24.623 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.623 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:24.882 17:29:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.882 [2024-11-26 17:29:02.098423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:24.882 [2024-11-26 17:29:02.098567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:24.882 [2024-11-26 17:29:02.098642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:24.882 "name": "Existed_Raid", 00:32:24.882 "uuid": "cee3f1d0-b64a-4fcc-98ff-a4a5665d7405", 00:32:24.882 "strip_size_kb": 64, 00:32:24.882 "state": "offline", 00:32:24.882 "raid_level": "concat", 00:32:24.882 "superblock": false, 00:32:24.882 "num_base_bdevs": 4, 00:32:24.882 "num_base_bdevs_discovered": 3, 00:32:24.882 "num_base_bdevs_operational": 3, 00:32:24.882 "base_bdevs_list": [ 00:32:24.882 { 00:32:24.882 "name": null, 00:32:24.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.882 "is_configured": false, 00:32:24.882 "data_offset": 0, 00:32:24.882 "data_size": 65536 00:32:24.882 }, 00:32:24.882 { 00:32:24.882 "name": "BaseBdev2", 00:32:24.882 "uuid": "168b8ddd-6bcd-42d2-bbf3-08d96565dbd3", 00:32:24.882 "is_configured": 
true, 00:32:24.882 "data_offset": 0, 00:32:24.882 "data_size": 65536 00:32:24.882 }, 00:32:24.882 { 00:32:24.882 "name": "BaseBdev3", 00:32:24.882 "uuid": "8a435bb6-0ed2-47c6-bbf4-f24bd60f679d", 00:32:24.882 "is_configured": true, 00:32:24.882 "data_offset": 0, 00:32:24.882 "data_size": 65536 00:32:24.882 }, 00:32:24.882 { 00:32:24.882 "name": "BaseBdev4", 00:32:24.882 "uuid": "6d6e3984-9471-41f1-9e1d-ddb43dd6fca1", 00:32:24.882 "is_configured": true, 00:32:24.882 "data_offset": 0, 00:32:24.882 "data_size": 65536 00:32:24.882 } 00:32:24.882 ] 00:32:24.882 }' 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:24.882 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 [2024-11-26 17:29:02.679198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.449 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 [2024-11-26 17:29:02.836185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:25.708 17:29:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.708 17:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.708 [2024-11-26 17:29:02.997089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:25.708 [2024-11-26 17:29:02.997161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:25.708 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:25.709 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:25.709 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:25.709 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:25.709 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.968 BaseBdev2 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.968 [ 00:32:25.968 { 00:32:25.968 "name": "BaseBdev2", 00:32:25.968 "aliases": [ 00:32:25.968 "fb9c778a-090f-4cbb-96de-43594bea6923" 00:32:25.968 ], 00:32:25.968 "product_name": "Malloc disk", 00:32:25.968 "block_size": 512, 00:32:25.968 "num_blocks": 65536, 00:32:25.968 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:25.968 "assigned_rate_limits": { 00:32:25.968 "rw_ios_per_sec": 0, 00:32:25.968 "rw_mbytes_per_sec": 0, 00:32:25.968 "r_mbytes_per_sec": 0, 00:32:25.968 "w_mbytes_per_sec": 0 00:32:25.968 }, 00:32:25.968 "claimed": false, 00:32:25.968 "zoned": false, 00:32:25.968 "supported_io_types": { 00:32:25.968 "read": true, 00:32:25.968 "write": true, 00:32:25.968 "unmap": true, 00:32:25.968 "flush": true, 00:32:25.968 "reset": true, 00:32:25.968 "nvme_admin": false, 00:32:25.968 "nvme_io": false, 00:32:25.968 "nvme_io_md": false, 00:32:25.968 "write_zeroes": true, 00:32:25.968 "zcopy": true, 00:32:25.968 "get_zone_info": false, 00:32:25.968 "zone_management": false, 00:32:25.968 "zone_append": false, 00:32:25.968 "compare": false, 00:32:25.968 "compare_and_write": false, 00:32:25.968 "abort": true, 00:32:25.968 "seek_hole": false, 00:32:25.968 
"seek_data": false, 00:32:25.968 "copy": true, 00:32:25.968 "nvme_iov_md": false 00:32:25.968 }, 00:32:25.968 "memory_domains": [ 00:32:25.968 { 00:32:25.968 "dma_device_id": "system", 00:32:25.968 "dma_device_type": 1 00:32:25.968 }, 00:32:25.968 { 00:32:25.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.968 "dma_device_type": 2 00:32:25.968 } 00:32:25.968 ], 00:32:25.968 "driver_specific": {} 00:32:25.968 } 00:32:25.968 ] 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.968 BaseBdev3 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.968 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.968 [ 00:32:25.968 { 00:32:25.968 "name": "BaseBdev3", 00:32:25.968 "aliases": [ 00:32:25.968 "12e92500-3a48-400f-af20-42a57ff659e8" 00:32:25.968 ], 00:32:25.968 "product_name": "Malloc disk", 00:32:25.968 "block_size": 512, 00:32:25.968 "num_blocks": 65536, 00:32:25.968 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:25.968 "assigned_rate_limits": { 00:32:25.968 "rw_ios_per_sec": 0, 00:32:25.968 "rw_mbytes_per_sec": 0, 00:32:25.968 "r_mbytes_per_sec": 0, 00:32:25.968 "w_mbytes_per_sec": 0 00:32:25.968 }, 00:32:25.968 "claimed": false, 00:32:25.968 "zoned": false, 00:32:25.968 "supported_io_types": { 00:32:25.968 "read": true, 00:32:25.968 "write": true, 00:32:25.968 "unmap": true, 00:32:25.968 "flush": true, 00:32:25.968 "reset": true, 00:32:25.968 "nvme_admin": false, 00:32:25.968 "nvme_io": false, 00:32:25.968 "nvme_io_md": false, 00:32:25.968 "write_zeroes": true, 00:32:25.968 "zcopy": true, 00:32:25.968 "get_zone_info": false, 00:32:25.968 "zone_management": false, 00:32:25.968 "zone_append": false, 00:32:25.968 "compare": false, 00:32:25.969 "compare_and_write": false, 00:32:25.969 "abort": true, 00:32:25.969 "seek_hole": false, 00:32:25.969 "seek_data": false, 
00:32:25.969 "copy": true, 00:32:25.969 "nvme_iov_md": false 00:32:25.969 }, 00:32:25.969 "memory_domains": [ 00:32:25.969 { 00:32:25.969 "dma_device_id": "system", 00:32:25.969 "dma_device_type": 1 00:32:25.969 }, 00:32:25.969 { 00:32:25.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.969 "dma_device_type": 2 00:32:25.969 } 00:32:25.969 ], 00:32:25.969 "driver_specific": {} 00:32:25.969 } 00:32:25.969 ] 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.969 BaseBdev4 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:25.969 
17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.969 [ 00:32:25.969 { 00:32:25.969 "name": "BaseBdev4", 00:32:25.969 "aliases": [ 00:32:25.969 "cd1f5656-d468-4c70-9e35-66f9275133dc" 00:32:25.969 ], 00:32:25.969 "product_name": "Malloc disk", 00:32:25.969 "block_size": 512, 00:32:25.969 "num_blocks": 65536, 00:32:25.969 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:25.969 "assigned_rate_limits": { 00:32:25.969 "rw_ios_per_sec": 0, 00:32:25.969 "rw_mbytes_per_sec": 0, 00:32:25.969 "r_mbytes_per_sec": 0, 00:32:25.969 "w_mbytes_per_sec": 0 00:32:25.969 }, 00:32:25.969 "claimed": false, 00:32:25.969 "zoned": false, 00:32:25.969 "supported_io_types": { 00:32:25.969 "read": true, 00:32:25.969 "write": true, 00:32:25.969 "unmap": true, 00:32:25.969 "flush": true, 00:32:25.969 "reset": true, 00:32:25.969 "nvme_admin": false, 00:32:25.969 "nvme_io": false, 00:32:25.969 "nvme_io_md": false, 00:32:25.969 "write_zeroes": true, 00:32:25.969 "zcopy": true, 00:32:25.969 "get_zone_info": false, 00:32:25.969 "zone_management": false, 00:32:25.969 "zone_append": false, 00:32:25.969 "compare": false, 00:32:25.969 "compare_and_write": false, 00:32:25.969 "abort": true, 00:32:25.969 "seek_hole": false, 00:32:25.969 "seek_data": false, 00:32:25.969 
"copy": true, 00:32:25.969 "nvme_iov_md": false 00:32:25.969 }, 00:32:25.969 "memory_domains": [ 00:32:25.969 { 00:32:25.969 "dma_device_id": "system", 00:32:25.969 "dma_device_type": 1 00:32:25.969 }, 00:32:25.969 { 00:32:25.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.969 "dma_device_type": 2 00:32:25.969 } 00:32:25.969 ], 00:32:25.969 "driver_specific": {} 00:32:25.969 } 00:32:25.969 ] 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.969 [2024-11-26 17:29:03.407295] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:25.969 [2024-11-26 17:29:03.407454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:25.969 [2024-11-26 17:29:03.407589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:25.969 [2024-11-26 17:29:03.409841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:25.969 [2024-11-26 17:29:03.410029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.969 17:29:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:25.969 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.228 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.228 "name": "Existed_Raid", 00:32:26.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.228 "strip_size_kb": 64, 00:32:26.228 "state": "configuring", 00:32:26.228 
"raid_level": "concat", 00:32:26.228 "superblock": false, 00:32:26.228 "num_base_bdevs": 4, 00:32:26.228 "num_base_bdevs_discovered": 3, 00:32:26.228 "num_base_bdevs_operational": 4, 00:32:26.228 "base_bdevs_list": [ 00:32:26.228 { 00:32:26.228 "name": "BaseBdev1", 00:32:26.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.228 "is_configured": false, 00:32:26.228 "data_offset": 0, 00:32:26.228 "data_size": 0 00:32:26.228 }, 00:32:26.228 { 00:32:26.228 "name": "BaseBdev2", 00:32:26.228 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:26.228 "is_configured": true, 00:32:26.229 "data_offset": 0, 00:32:26.229 "data_size": 65536 00:32:26.229 }, 00:32:26.229 { 00:32:26.229 "name": "BaseBdev3", 00:32:26.229 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:26.229 "is_configured": true, 00:32:26.229 "data_offset": 0, 00:32:26.229 "data_size": 65536 00:32:26.229 }, 00:32:26.229 { 00:32:26.229 "name": "BaseBdev4", 00:32:26.229 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:26.229 "is_configured": true, 00:32:26.229 "data_offset": 0, 00:32:26.229 "data_size": 65536 00:32:26.229 } 00:32:26.229 ] 00:32:26.229 }' 00:32:26.229 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.229 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.488 [2024-11-26 17:29:03.851378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.488 "name": "Existed_Raid", 00:32:26.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.488 "strip_size_kb": 64, 00:32:26.488 "state": "configuring", 00:32:26.488 "raid_level": "concat", 00:32:26.488 "superblock": false, 
00:32:26.488 "num_base_bdevs": 4, 00:32:26.488 "num_base_bdevs_discovered": 2, 00:32:26.488 "num_base_bdevs_operational": 4, 00:32:26.488 "base_bdevs_list": [ 00:32:26.488 { 00:32:26.488 "name": "BaseBdev1", 00:32:26.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.488 "is_configured": false, 00:32:26.488 "data_offset": 0, 00:32:26.488 "data_size": 0 00:32:26.488 }, 00:32:26.488 { 00:32:26.488 "name": null, 00:32:26.488 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:26.488 "is_configured": false, 00:32:26.488 "data_offset": 0, 00:32:26.488 "data_size": 65536 00:32:26.488 }, 00:32:26.488 { 00:32:26.488 "name": "BaseBdev3", 00:32:26.488 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:26.488 "is_configured": true, 00:32:26.488 "data_offset": 0, 00:32:26.488 "data_size": 65536 00:32:26.488 }, 00:32:26.488 { 00:32:26.488 "name": "BaseBdev4", 00:32:26.488 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:26.488 "is_configured": true, 00:32:26.488 "data_offset": 0, 00:32:26.488 "data_size": 65536 00:32:26.488 } 00:32:26.488 ] 00:32:26.488 }' 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.488 17:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:32:27.057 17:29:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.057 [2024-11-26 17:29:04.402433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:27.057 BaseBdev1 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:27.057 [ 00:32:27.057 { 00:32:27.057 "name": "BaseBdev1", 00:32:27.057 "aliases": [ 00:32:27.057 "dfbc1958-0eb1-4ab4-9f55-1a4213149c85" 00:32:27.057 ], 00:32:27.057 "product_name": "Malloc disk", 00:32:27.057 "block_size": 512, 00:32:27.057 "num_blocks": 65536, 00:32:27.057 "uuid": "dfbc1958-0eb1-4ab4-9f55-1a4213149c85", 00:32:27.057 "assigned_rate_limits": { 00:32:27.057 "rw_ios_per_sec": 0, 00:32:27.057 "rw_mbytes_per_sec": 0, 00:32:27.057 "r_mbytes_per_sec": 0, 00:32:27.057 "w_mbytes_per_sec": 0 00:32:27.057 }, 00:32:27.057 "claimed": true, 00:32:27.057 "claim_type": "exclusive_write", 00:32:27.057 "zoned": false, 00:32:27.057 "supported_io_types": { 00:32:27.057 "read": true, 00:32:27.057 "write": true, 00:32:27.057 "unmap": true, 00:32:27.057 "flush": true, 00:32:27.057 "reset": true, 00:32:27.057 "nvme_admin": false, 00:32:27.057 "nvme_io": false, 00:32:27.057 "nvme_io_md": false, 00:32:27.057 "write_zeroes": true, 00:32:27.057 "zcopy": true, 00:32:27.057 "get_zone_info": false, 00:32:27.057 "zone_management": false, 00:32:27.057 "zone_append": false, 00:32:27.057 "compare": false, 00:32:27.057 "compare_and_write": false, 00:32:27.057 "abort": true, 00:32:27.057 "seek_hole": false, 00:32:27.057 "seek_data": false, 00:32:27.057 "copy": true, 00:32:27.057 "nvme_iov_md": false 00:32:27.057 }, 00:32:27.057 "memory_domains": [ 00:32:27.057 { 00:32:27.057 "dma_device_id": "system", 00:32:27.057 "dma_device_type": 1 00:32:27.057 }, 00:32:27.057 { 00:32:27.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.057 "dma_device_type": 2 00:32:27.057 } 00:32:27.057 ], 00:32:27.057 "driver_specific": {} 00:32:27.057 } 00:32:27.057 ] 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.057 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.058 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.058 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.058 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.058 "name": "Existed_Raid", 00:32:27.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.058 "strip_size_kb": 64, 00:32:27.058 "state": "configuring", 00:32:27.058 "raid_level": "concat", 00:32:27.058 "superblock": false, 
00:32:27.058 "num_base_bdevs": 4, 00:32:27.058 "num_base_bdevs_discovered": 3, 00:32:27.058 "num_base_bdevs_operational": 4, 00:32:27.058 "base_bdevs_list": [ 00:32:27.058 { 00:32:27.058 "name": "BaseBdev1", 00:32:27.058 "uuid": "dfbc1958-0eb1-4ab4-9f55-1a4213149c85", 00:32:27.058 "is_configured": true, 00:32:27.058 "data_offset": 0, 00:32:27.058 "data_size": 65536 00:32:27.058 }, 00:32:27.058 { 00:32:27.058 "name": null, 00:32:27.058 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:27.058 "is_configured": false, 00:32:27.058 "data_offset": 0, 00:32:27.058 "data_size": 65536 00:32:27.058 }, 00:32:27.058 { 00:32:27.058 "name": "BaseBdev3", 00:32:27.058 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:27.058 "is_configured": true, 00:32:27.058 "data_offset": 0, 00:32:27.058 "data_size": 65536 00:32:27.058 }, 00:32:27.058 { 00:32:27.058 "name": "BaseBdev4", 00:32:27.058 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:27.058 "is_configured": true, 00:32:27.058 "data_offset": 0, 00:32:27.058 "data_size": 65536 00:32:27.058 } 00:32:27.058 ] 00:32:27.058 }' 00:32:27.058 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.058 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:27.626 17:29:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.626 [2024-11-26 17:29:04.922641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.626 "name": "Existed_Raid", 00:32:27.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.626 "strip_size_kb": 64, 00:32:27.626 "state": "configuring", 00:32:27.626 "raid_level": "concat", 00:32:27.626 "superblock": false, 00:32:27.626 "num_base_bdevs": 4, 00:32:27.626 "num_base_bdevs_discovered": 2, 00:32:27.626 "num_base_bdevs_operational": 4, 00:32:27.626 "base_bdevs_list": [ 00:32:27.626 { 00:32:27.626 "name": "BaseBdev1", 00:32:27.626 "uuid": "dfbc1958-0eb1-4ab4-9f55-1a4213149c85", 00:32:27.626 "is_configured": true, 00:32:27.626 "data_offset": 0, 00:32:27.626 "data_size": 65536 00:32:27.626 }, 00:32:27.626 { 00:32:27.626 "name": null, 00:32:27.626 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:27.626 "is_configured": false, 00:32:27.626 "data_offset": 0, 00:32:27.626 "data_size": 65536 00:32:27.626 }, 00:32:27.626 { 00:32:27.626 "name": null, 00:32:27.626 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:27.626 "is_configured": false, 00:32:27.626 "data_offset": 0, 00:32:27.626 "data_size": 65536 00:32:27.626 }, 00:32:27.626 { 00:32:27.626 "name": "BaseBdev4", 00:32:27.626 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:27.626 "is_configured": true, 00:32:27.626 "data_offset": 0, 00:32:27.626 "data_size": 65536 00:32:27.626 } 00:32:27.626 ] 00:32:27.626 }' 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.626 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.195 [2024-11-26 17:29:05.418731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.195 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:28.195 "name": "Existed_Raid", 00:32:28.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.195 "strip_size_kb": 64, 00:32:28.195 "state": "configuring", 00:32:28.195 "raid_level": "concat", 00:32:28.195 "superblock": false, 00:32:28.195 "num_base_bdevs": 4, 00:32:28.195 "num_base_bdevs_discovered": 3, 00:32:28.195 "num_base_bdevs_operational": 4, 00:32:28.195 "base_bdevs_list": [ 00:32:28.195 { 00:32:28.196 "name": "BaseBdev1", 00:32:28.196 "uuid": "dfbc1958-0eb1-4ab4-9f55-1a4213149c85", 00:32:28.196 "is_configured": true, 00:32:28.196 "data_offset": 0, 00:32:28.196 "data_size": 65536 00:32:28.196 }, 00:32:28.196 { 00:32:28.196 "name": null, 00:32:28.196 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:28.196 "is_configured": false, 00:32:28.196 "data_offset": 0, 00:32:28.196 "data_size": 65536 00:32:28.196 }, 00:32:28.196 { 00:32:28.196 "name": "BaseBdev3", 00:32:28.196 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:28.196 
"is_configured": true, 00:32:28.196 "data_offset": 0, 00:32:28.196 "data_size": 65536 00:32:28.196 }, 00:32:28.196 { 00:32:28.196 "name": "BaseBdev4", 00:32:28.196 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:28.196 "is_configured": true, 00:32:28.196 "data_offset": 0, 00:32:28.196 "data_size": 65536 00:32:28.196 } 00:32:28.196 ] 00:32:28.196 }' 00:32:28.196 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:28.196 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.455 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.455 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.455 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.455 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:28.455 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.455 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:28.455 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:28.455 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.455 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.455 [2024-11-26 17:29:05.874862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:28.714 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.714 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:28.714 "name": "Existed_Raid", 00:32:28.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.715 "strip_size_kb": 64, 00:32:28.715 "state": "configuring", 00:32:28.715 "raid_level": "concat", 00:32:28.715 "superblock": false, 00:32:28.715 "num_base_bdevs": 4, 00:32:28.715 "num_base_bdevs_discovered": 2, 00:32:28.715 "num_base_bdevs_operational": 4, 
00:32:28.715 "base_bdevs_list": [ 00:32:28.715 { 00:32:28.715 "name": null, 00:32:28.715 "uuid": "dfbc1958-0eb1-4ab4-9f55-1a4213149c85", 00:32:28.715 "is_configured": false, 00:32:28.715 "data_offset": 0, 00:32:28.715 "data_size": 65536 00:32:28.715 }, 00:32:28.715 { 00:32:28.715 "name": null, 00:32:28.715 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:28.715 "is_configured": false, 00:32:28.715 "data_offset": 0, 00:32:28.715 "data_size": 65536 00:32:28.715 }, 00:32:28.715 { 00:32:28.715 "name": "BaseBdev3", 00:32:28.715 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:28.715 "is_configured": true, 00:32:28.715 "data_offset": 0, 00:32:28.715 "data_size": 65536 00:32:28.715 }, 00:32:28.715 { 00:32:28.715 "name": "BaseBdev4", 00:32:28.715 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:28.715 "is_configured": true, 00:32:28.715 "data_offset": 0, 00:32:28.715 "data_size": 65536 00:32:28.715 } 00:32:28.715 ] 00:32:28.715 }' 00:32:28.715 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:28.715 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.293 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.293 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.293 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.293 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:29.293 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.293 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:29.293 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:29.293 17:29:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.294 [2024-11-26 17:29:06.456567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.294 17:29:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:29.294 "name": "Existed_Raid", 00:32:29.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.294 "strip_size_kb": 64, 00:32:29.294 "state": "configuring", 00:32:29.294 "raid_level": "concat", 00:32:29.294 "superblock": false, 00:32:29.294 "num_base_bdevs": 4, 00:32:29.294 "num_base_bdevs_discovered": 3, 00:32:29.294 "num_base_bdevs_operational": 4, 00:32:29.294 "base_bdevs_list": [ 00:32:29.294 { 00:32:29.294 "name": null, 00:32:29.294 "uuid": "dfbc1958-0eb1-4ab4-9f55-1a4213149c85", 00:32:29.294 "is_configured": false, 00:32:29.294 "data_offset": 0, 00:32:29.294 "data_size": 65536 00:32:29.294 }, 00:32:29.294 { 00:32:29.294 "name": "BaseBdev2", 00:32:29.294 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:29.294 "is_configured": true, 00:32:29.294 "data_offset": 0, 00:32:29.294 "data_size": 65536 00:32:29.294 }, 00:32:29.294 { 00:32:29.294 "name": "BaseBdev3", 00:32:29.294 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:29.294 "is_configured": true, 00:32:29.294 "data_offset": 0, 00:32:29.294 "data_size": 65536 00:32:29.294 }, 00:32:29.294 { 00:32:29.294 "name": "BaseBdev4", 00:32:29.294 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:29.294 "is_configured": true, 00:32:29.294 "data_offset": 0, 00:32:29.294 "data_size": 65536 00:32:29.294 } 00:32:29.294 ] 00:32:29.294 }' 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:29.294 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:29.552 17:29:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dfbc1958-0eb1-4ab4-9f55-1a4213149c85 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.552 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.811 [2024-11-26 17:29:07.019289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:29.811 [2024-11-26 17:29:07.019351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:29.811 [2024-11-26 17:29:07.019360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:32:29.811 [2024-11-26 17:29:07.019641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:29.811 
[2024-11-26 17:29:07.019809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:29.811 [2024-11-26 17:29:07.019826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:32:29.811 [2024-11-26 17:29:07.020121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:29.811 NewBaseBdev 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:32:29.811 [ 00:32:29.811 { 00:32:29.811 "name": "NewBaseBdev", 00:32:29.811 "aliases": [ 00:32:29.811 "dfbc1958-0eb1-4ab4-9f55-1a4213149c85" 00:32:29.811 ], 00:32:29.811 "product_name": "Malloc disk", 00:32:29.811 "block_size": 512, 00:32:29.811 "num_blocks": 65536, 00:32:29.811 "uuid": "dfbc1958-0eb1-4ab4-9f55-1a4213149c85", 00:32:29.811 "assigned_rate_limits": { 00:32:29.811 "rw_ios_per_sec": 0, 00:32:29.811 "rw_mbytes_per_sec": 0, 00:32:29.811 "r_mbytes_per_sec": 0, 00:32:29.811 "w_mbytes_per_sec": 0 00:32:29.811 }, 00:32:29.811 "claimed": true, 00:32:29.811 "claim_type": "exclusive_write", 00:32:29.811 "zoned": false, 00:32:29.811 "supported_io_types": { 00:32:29.811 "read": true, 00:32:29.811 "write": true, 00:32:29.811 "unmap": true, 00:32:29.811 "flush": true, 00:32:29.811 "reset": true, 00:32:29.811 "nvme_admin": false, 00:32:29.811 "nvme_io": false, 00:32:29.811 "nvme_io_md": false, 00:32:29.811 "write_zeroes": true, 00:32:29.811 "zcopy": true, 00:32:29.811 "get_zone_info": false, 00:32:29.811 "zone_management": false, 00:32:29.811 "zone_append": false, 00:32:29.811 "compare": false, 00:32:29.811 "compare_and_write": false, 00:32:29.811 "abort": true, 00:32:29.811 "seek_hole": false, 00:32:29.811 "seek_data": false, 00:32:29.811 "copy": true, 00:32:29.811 "nvme_iov_md": false 00:32:29.811 }, 00:32:29.811 "memory_domains": [ 00:32:29.811 { 00:32:29.811 "dma_device_id": "system", 00:32:29.811 "dma_device_type": 1 00:32:29.811 }, 00:32:29.811 { 00:32:29.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.811 "dma_device_type": 2 00:32:29.811 } 00:32:29.811 ], 00:32:29.811 "driver_specific": {} 00:32:29.811 } 00:32:29.811 ] 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.811 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.812 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:29.812 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.812 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.812 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:29.812 "name": "Existed_Raid", 00:32:29.812 "uuid": "bdffc491-1466-4379-8fee-3b97b01d8bf5", 00:32:29.812 "strip_size_kb": 64, 00:32:29.812 "state": "online", 00:32:29.812 "raid_level": "concat", 00:32:29.812 "superblock": false, 00:32:29.812 "num_base_bdevs": 4, 00:32:29.812 
"num_base_bdevs_discovered": 4, 00:32:29.812 "num_base_bdevs_operational": 4, 00:32:29.812 "base_bdevs_list": [ 00:32:29.812 { 00:32:29.812 "name": "NewBaseBdev", 00:32:29.812 "uuid": "dfbc1958-0eb1-4ab4-9f55-1a4213149c85", 00:32:29.812 "is_configured": true, 00:32:29.812 "data_offset": 0, 00:32:29.812 "data_size": 65536 00:32:29.812 }, 00:32:29.812 { 00:32:29.812 "name": "BaseBdev2", 00:32:29.812 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:29.812 "is_configured": true, 00:32:29.812 "data_offset": 0, 00:32:29.812 "data_size": 65536 00:32:29.812 }, 00:32:29.812 { 00:32:29.812 "name": "BaseBdev3", 00:32:29.812 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:29.812 "is_configured": true, 00:32:29.812 "data_offset": 0, 00:32:29.812 "data_size": 65536 00:32:29.812 }, 00:32:29.812 { 00:32:29.812 "name": "BaseBdev4", 00:32:29.812 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:29.812 "is_configured": true, 00:32:29.812 "data_offset": 0, 00:32:29.812 "data_size": 65536 00:32:29.812 } 00:32:29.812 ] 00:32:29.812 }' 00:32:29.812 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:29.812 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.069 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.069 [2024-11-26 17:29:07.503800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:30.328 "name": "Existed_Raid", 00:32:30.328 "aliases": [ 00:32:30.328 "bdffc491-1466-4379-8fee-3b97b01d8bf5" 00:32:30.328 ], 00:32:30.328 "product_name": "Raid Volume", 00:32:30.328 "block_size": 512, 00:32:30.328 "num_blocks": 262144, 00:32:30.328 "uuid": "bdffc491-1466-4379-8fee-3b97b01d8bf5", 00:32:30.328 "assigned_rate_limits": { 00:32:30.328 "rw_ios_per_sec": 0, 00:32:30.328 "rw_mbytes_per_sec": 0, 00:32:30.328 "r_mbytes_per_sec": 0, 00:32:30.328 "w_mbytes_per_sec": 0 00:32:30.328 }, 00:32:30.328 "claimed": false, 00:32:30.328 "zoned": false, 00:32:30.328 "supported_io_types": { 00:32:30.328 "read": true, 00:32:30.328 "write": true, 00:32:30.328 "unmap": true, 00:32:30.328 "flush": true, 00:32:30.328 "reset": true, 00:32:30.328 "nvme_admin": false, 00:32:30.328 "nvme_io": false, 00:32:30.328 "nvme_io_md": false, 00:32:30.328 "write_zeroes": true, 00:32:30.328 "zcopy": false, 00:32:30.328 "get_zone_info": false, 00:32:30.328 "zone_management": false, 00:32:30.328 "zone_append": false, 00:32:30.328 "compare": false, 00:32:30.328 "compare_and_write": false, 00:32:30.328 "abort": false, 00:32:30.328 "seek_hole": false, 00:32:30.328 "seek_data": false, 00:32:30.328 "copy": false, 00:32:30.328 "nvme_iov_md": false 00:32:30.328 }, 00:32:30.328 "memory_domains": [ 
00:32:30.328 { 00:32:30.328 "dma_device_id": "system", 00:32:30.328 "dma_device_type": 1 00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.328 "dma_device_type": 2 00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "dma_device_id": "system", 00:32:30.328 "dma_device_type": 1 00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.328 "dma_device_type": 2 00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "dma_device_id": "system", 00:32:30.328 "dma_device_type": 1 00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.328 "dma_device_type": 2 00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "dma_device_id": "system", 00:32:30.328 "dma_device_type": 1 00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.328 "dma_device_type": 2 00:32:30.328 } 00:32:30.328 ], 00:32:30.328 "driver_specific": { 00:32:30.328 "raid": { 00:32:30.328 "uuid": "bdffc491-1466-4379-8fee-3b97b01d8bf5", 00:32:30.328 "strip_size_kb": 64, 00:32:30.328 "state": "online", 00:32:30.328 "raid_level": "concat", 00:32:30.328 "superblock": false, 00:32:30.328 "num_base_bdevs": 4, 00:32:30.328 "num_base_bdevs_discovered": 4, 00:32:30.328 "num_base_bdevs_operational": 4, 00:32:30.328 "base_bdevs_list": [ 00:32:30.328 { 00:32:30.328 "name": "NewBaseBdev", 00:32:30.328 "uuid": "dfbc1958-0eb1-4ab4-9f55-1a4213149c85", 00:32:30.328 "is_configured": true, 00:32:30.328 "data_offset": 0, 00:32:30.328 "data_size": 65536 00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "name": "BaseBdev2", 00:32:30.328 "uuid": "fb9c778a-090f-4cbb-96de-43594bea6923", 00:32:30.328 "is_configured": true, 00:32:30.328 "data_offset": 0, 00:32:30.328 "data_size": 65536 00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "name": "BaseBdev3", 00:32:30.328 "uuid": "12e92500-3a48-400f-af20-42a57ff659e8", 00:32:30.328 "is_configured": true, 00:32:30.328 "data_offset": 0, 00:32:30.328 "data_size": 65536 
00:32:30.328 }, 00:32:30.328 { 00:32:30.328 "name": "BaseBdev4", 00:32:30.328 "uuid": "cd1f5656-d468-4c70-9e35-66f9275133dc", 00:32:30.328 "is_configured": true, 00:32:30.328 "data_offset": 0, 00:32:30.328 "data_size": 65536 00:32:30.328 } 00:32:30.328 ] 00:32:30.328 } 00:32:30.328 } 00:32:30.328 }' 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:30.328 BaseBdev2 00:32:30.328 BaseBdev3 00:32:30.328 BaseBdev4' 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.328 
17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.328 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.329 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.329 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.587 [2024-11-26 17:29:07.819499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:30.587 [2024-11-26 17:29:07.819636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:30.587 [2024-11-26 17:29:07.819734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:30.587 [2024-11-26 17:29:07.819805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:30.587 [2024-11-26 17:29:07.819817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71713 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 71713 ']' 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71713 00:32:30.587 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:32:30.588 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.588 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71713 00:32:30.588 killing process with pid 71713 00:32:30.588 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:30.588 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:30.588 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71713' 00:32:30.588 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71713 00:32:30.588 [2024-11-26 17:29:07.863293] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:30.588 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71713 00:32:30.845 [2024-11-26 17:29:08.272580] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:32:32.215 00:32:32.215 real 0m11.850s 00:32:32.215 user 0m18.938s 00:32:32.215 sys 0m2.203s 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.215 ************************************ 00:32:32.215 END TEST raid_state_function_test 00:32:32.215 ************************************ 00:32:32.215 17:29:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 4 true 00:32:32.215 17:29:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:32.215 17:29:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.215 17:29:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:32.215 ************************************ 00:32:32.215 START TEST raid_state_function_test_sb 00:32:32.215 ************************************ 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72384 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72384' 00:32:32.215 Process raid pid: 72384 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72384 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72384 ']' 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.215 17:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.216 [2024-11-26 17:29:09.617414] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:32:32.216 [2024-11-26 17:29:09.617597] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.473 [2024-11-26 17:29:09.809649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.732 [2024-11-26 17:29:09.925941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.732 [2024-11-26 17:29:10.142180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:32.732 [2024-11-26 17:29:10.142226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.299 [2024-11-26 17:29:10.553670] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:33.299 [2024-11-26 17:29:10.553728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:33.299 [2024-11-26 17:29:10.553740] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:33.299 [2024-11-26 17:29:10.553754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:33.299 [2024-11-26 17:29:10.553768] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:32:33.299 [2024-11-26 17:29:10.553781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:33.299 [2024-11-26 17:29:10.553789] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:33.299 [2024-11-26 17:29:10.553801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.299 
17:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:33.299 "name": "Existed_Raid", 00:32:33.299 "uuid": "40ac5cce-e422-4168-8864-357f030f997d", 00:32:33.299 "strip_size_kb": 64, 00:32:33.299 "state": "configuring", 00:32:33.299 "raid_level": "concat", 00:32:33.299 "superblock": true, 00:32:33.299 "num_base_bdevs": 4, 00:32:33.299 "num_base_bdevs_discovered": 0, 00:32:33.299 "num_base_bdevs_operational": 4, 00:32:33.299 "base_bdevs_list": [ 00:32:33.299 { 00:32:33.299 "name": "BaseBdev1", 00:32:33.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.299 "is_configured": false, 00:32:33.299 "data_offset": 0, 00:32:33.299 "data_size": 0 00:32:33.299 }, 00:32:33.299 { 00:32:33.299 "name": "BaseBdev2", 00:32:33.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.299 "is_configured": false, 00:32:33.299 "data_offset": 0, 00:32:33.299 "data_size": 0 00:32:33.299 }, 00:32:33.299 { 00:32:33.299 "name": "BaseBdev3", 00:32:33.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.299 "is_configured": false, 00:32:33.299 "data_offset": 0, 00:32:33.299 "data_size": 0 00:32:33.299 }, 00:32:33.299 { 00:32:33.299 "name": "BaseBdev4", 00:32:33.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.299 "is_configured": false, 00:32:33.299 "data_offset": 0, 00:32:33.299 "data_size": 0 00:32:33.299 } 00:32:33.299 ] 00:32:33.299 }' 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:33.299 17:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.867 17:29:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.867 [2024-11-26 17:29:11.033707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:33.867 [2024-11-26 17:29:11.033885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.867 [2024-11-26 17:29:11.045707] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:33.867 [2024-11-26 17:29:11.045856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:33.867 [2024-11-26 17:29:11.045990] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:33.867 [2024-11-26 17:29:11.046015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:33.867 [2024-11-26 17:29:11.046034] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:33.867 [2024-11-26 17:29:11.046065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:33.867 [2024-11-26 17:29:11.046074] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:32:33.867 [2024-11-26 17:29:11.046087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.867 [2024-11-26 17:29:11.091147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:33.867 BaseBdev1 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.867 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.867 [ 00:32:33.867 { 00:32:33.867 "name": "BaseBdev1", 00:32:33.867 "aliases": [ 00:32:33.867 "2dbda38d-02c8-4650-85c8-9f961e866afc" 00:32:33.867 ], 00:32:33.867 "product_name": "Malloc disk", 00:32:33.867 "block_size": 512, 00:32:33.867 "num_blocks": 65536, 00:32:33.867 "uuid": "2dbda38d-02c8-4650-85c8-9f961e866afc", 00:32:33.867 "assigned_rate_limits": { 00:32:33.867 "rw_ios_per_sec": 0, 00:32:33.867 "rw_mbytes_per_sec": 0, 00:32:33.867 "r_mbytes_per_sec": 0, 00:32:33.867 "w_mbytes_per_sec": 0 00:32:33.867 }, 00:32:33.867 "claimed": true, 00:32:33.867 "claim_type": "exclusive_write", 00:32:33.867 "zoned": false, 00:32:33.867 "supported_io_types": { 00:32:33.867 "read": true, 00:32:33.867 "write": true, 00:32:33.867 "unmap": true, 00:32:33.867 "flush": true, 00:32:33.867 "reset": true, 00:32:33.867 "nvme_admin": false, 00:32:33.867 "nvme_io": false, 00:32:33.867 "nvme_io_md": false, 00:32:33.867 "write_zeroes": true, 00:32:33.867 "zcopy": true, 00:32:33.867 "get_zone_info": false, 00:32:33.867 "zone_management": false, 00:32:33.867 "zone_append": false, 00:32:33.867 "compare": false, 00:32:33.868 "compare_and_write": false, 00:32:33.868 "abort": true, 00:32:33.868 "seek_hole": false, 00:32:33.868 "seek_data": false, 00:32:33.868 "copy": true, 00:32:33.868 "nvme_iov_md": false 00:32:33.868 }, 00:32:33.868 "memory_domains": [ 00:32:33.868 { 00:32:33.868 "dma_device_id": "system", 00:32:33.868 "dma_device_type": 1 00:32:33.868 }, 00:32:33.868 { 00:32:33.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:33.868 "dma_device_type": 2 00:32:33.868 } 
00:32:33.868 ], 00:32:33.868 "driver_specific": {} 00:32:33.868 } 00:32:33.868 ] 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.868 17:29:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:33.868 "name": "Existed_Raid", 00:32:33.868 "uuid": "0ac642fc-f2ee-439a-a901-3e534f836965", 00:32:33.868 "strip_size_kb": 64, 00:32:33.868 "state": "configuring", 00:32:33.868 "raid_level": "concat", 00:32:33.868 "superblock": true, 00:32:33.868 "num_base_bdevs": 4, 00:32:33.868 "num_base_bdevs_discovered": 1, 00:32:33.868 "num_base_bdevs_operational": 4, 00:32:33.868 "base_bdevs_list": [ 00:32:33.868 { 00:32:33.868 "name": "BaseBdev1", 00:32:33.868 "uuid": "2dbda38d-02c8-4650-85c8-9f961e866afc", 00:32:33.868 "is_configured": true, 00:32:33.868 "data_offset": 2048, 00:32:33.868 "data_size": 63488 00:32:33.868 }, 00:32:33.868 { 00:32:33.868 "name": "BaseBdev2", 00:32:33.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.868 "is_configured": false, 00:32:33.868 "data_offset": 0, 00:32:33.868 "data_size": 0 00:32:33.868 }, 00:32:33.868 { 00:32:33.868 "name": "BaseBdev3", 00:32:33.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.868 "is_configured": false, 00:32:33.868 "data_offset": 0, 00:32:33.868 "data_size": 0 00:32:33.868 }, 00:32:33.868 { 00:32:33.868 "name": "BaseBdev4", 00:32:33.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.868 "is_configured": false, 00:32:33.868 "data_offset": 0, 00:32:33.868 "data_size": 0 00:32:33.868 } 00:32:33.868 ] 00:32:33.868 }' 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:33.868 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.127 17:29:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.127 [2024-11-26 17:29:11.527293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:34.127 [2024-11-26 17:29:11.527347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.127 [2024-11-26 17:29:11.539359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:34.127 [2024-11-26 17:29:11.541603] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:34.127 [2024-11-26 17:29:11.541751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:34.127 [2024-11-26 17:29:11.541881] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:34.127 [2024-11-26 17:29:11.541933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:34.127 [2024-11-26 17:29:11.541963] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:34.127 [2024-11-26 17:29:11.541997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.127 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:34.386 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.386 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:32:34.386 "name": "Existed_Raid", 00:32:34.386 "uuid": "dca22561-f657-45a6-8c6b-29a0e69453f2", 00:32:34.386 "strip_size_kb": 64, 00:32:34.386 "state": "configuring", 00:32:34.386 "raid_level": "concat", 00:32:34.386 "superblock": true, 00:32:34.386 "num_base_bdevs": 4, 00:32:34.386 "num_base_bdevs_discovered": 1, 00:32:34.386 "num_base_bdevs_operational": 4, 00:32:34.386 "base_bdevs_list": [ 00:32:34.386 { 00:32:34.386 "name": "BaseBdev1", 00:32:34.386 "uuid": "2dbda38d-02c8-4650-85c8-9f961e866afc", 00:32:34.386 "is_configured": true, 00:32:34.386 "data_offset": 2048, 00:32:34.386 "data_size": 63488 00:32:34.386 }, 00:32:34.386 { 00:32:34.386 "name": "BaseBdev2", 00:32:34.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.386 "is_configured": false, 00:32:34.386 "data_offset": 0, 00:32:34.386 "data_size": 0 00:32:34.386 }, 00:32:34.386 { 00:32:34.386 "name": "BaseBdev3", 00:32:34.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.386 "is_configured": false, 00:32:34.386 "data_offset": 0, 00:32:34.386 "data_size": 0 00:32:34.386 }, 00:32:34.386 { 00:32:34.386 "name": "BaseBdev4", 00:32:34.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.386 "is_configured": false, 00:32:34.386 "data_offset": 0, 00:32:34.386 "data_size": 0 00:32:34.386 } 00:32:34.386 ] 00:32:34.386 }' 00:32:34.386 17:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.387 17:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.646 [2024-11-26 17:29:12.058888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:32:34.646 BaseBdev2 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.646 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.646 [ 00:32:34.646 { 00:32:34.646 "name": "BaseBdev2", 00:32:34.646 "aliases": [ 00:32:34.646 "2ec4279e-11c5-4b2c-a764-a49049f981f6" 00:32:34.646 ], 00:32:34.646 "product_name": "Malloc disk", 00:32:34.646 "block_size": 512, 00:32:34.646 "num_blocks": 65536, 00:32:34.646 "uuid": "2ec4279e-11c5-4b2c-a764-a49049f981f6", 
00:32:34.646 "assigned_rate_limits": { 00:32:34.646 "rw_ios_per_sec": 0, 00:32:34.646 "rw_mbytes_per_sec": 0, 00:32:34.646 "r_mbytes_per_sec": 0, 00:32:34.646 "w_mbytes_per_sec": 0 00:32:34.646 }, 00:32:34.646 "claimed": true, 00:32:34.646 "claim_type": "exclusive_write", 00:32:34.646 "zoned": false, 00:32:34.646 "supported_io_types": { 00:32:34.646 "read": true, 00:32:34.646 "write": true, 00:32:34.646 "unmap": true, 00:32:34.646 "flush": true, 00:32:34.646 "reset": true, 00:32:34.646 "nvme_admin": false, 00:32:34.646 "nvme_io": false, 00:32:34.646 "nvme_io_md": false, 00:32:34.646 "write_zeroes": true, 00:32:34.646 "zcopy": true, 00:32:34.646 "get_zone_info": false, 00:32:34.646 "zone_management": false, 00:32:34.646 "zone_append": false, 00:32:34.646 "compare": false, 00:32:34.646 "compare_and_write": false, 00:32:34.646 "abort": true, 00:32:34.646 "seek_hole": false, 00:32:34.646 "seek_data": false, 00:32:34.646 "copy": true, 00:32:34.646 "nvme_iov_md": false 00:32:34.646 }, 00:32:34.905 "memory_domains": [ 00:32:34.905 { 00:32:34.905 "dma_device_id": "system", 00:32:34.905 "dma_device_type": 1 00:32:34.905 }, 00:32:34.905 { 00:32:34.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:34.905 "dma_device_type": 2 00:32:34.905 } 00:32:34.905 ], 00:32:34.905 "driver_specific": {} 00:32:34.905 } 00:32:34.905 ] 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:34.905 "name": "Existed_Raid", 00:32:34.905 "uuid": "dca22561-f657-45a6-8c6b-29a0e69453f2", 00:32:34.905 "strip_size_kb": 64, 00:32:34.905 "state": "configuring", 00:32:34.905 "raid_level": "concat", 00:32:34.905 "superblock": true, 00:32:34.905 "num_base_bdevs": 4, 00:32:34.905 "num_base_bdevs_discovered": 2, 00:32:34.905 
"num_base_bdevs_operational": 4, 00:32:34.905 "base_bdevs_list": [ 00:32:34.905 { 00:32:34.905 "name": "BaseBdev1", 00:32:34.905 "uuid": "2dbda38d-02c8-4650-85c8-9f961e866afc", 00:32:34.905 "is_configured": true, 00:32:34.905 "data_offset": 2048, 00:32:34.905 "data_size": 63488 00:32:34.905 }, 00:32:34.905 { 00:32:34.905 "name": "BaseBdev2", 00:32:34.905 "uuid": "2ec4279e-11c5-4b2c-a764-a49049f981f6", 00:32:34.905 "is_configured": true, 00:32:34.905 "data_offset": 2048, 00:32:34.905 "data_size": 63488 00:32:34.905 }, 00:32:34.905 { 00:32:34.905 "name": "BaseBdev3", 00:32:34.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.905 "is_configured": false, 00:32:34.905 "data_offset": 0, 00:32:34.905 "data_size": 0 00:32:34.905 }, 00:32:34.905 { 00:32:34.905 "name": "BaseBdev4", 00:32:34.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.905 "is_configured": false, 00:32:34.905 "data_offset": 0, 00:32:34.905 "data_size": 0 00:32:34.905 } 00:32:34.905 ] 00:32:34.905 }' 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.905 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.165 [2024-11-26 17:29:12.566037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:35.165 BaseBdev3 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.165 [ 00:32:35.165 { 00:32:35.165 "name": "BaseBdev3", 00:32:35.165 "aliases": [ 00:32:35.165 "3733cf9e-ab0a-4a46-a8d9-2cd30b609868" 00:32:35.165 ], 00:32:35.165 "product_name": "Malloc disk", 00:32:35.165 "block_size": 512, 00:32:35.165 "num_blocks": 65536, 00:32:35.165 "uuid": "3733cf9e-ab0a-4a46-a8d9-2cd30b609868", 00:32:35.165 "assigned_rate_limits": { 00:32:35.165 "rw_ios_per_sec": 0, 00:32:35.165 "rw_mbytes_per_sec": 0, 00:32:35.165 "r_mbytes_per_sec": 0, 00:32:35.165 "w_mbytes_per_sec": 0 00:32:35.165 }, 00:32:35.165 "claimed": true, 00:32:35.165 "claim_type": "exclusive_write", 00:32:35.165 "zoned": false, 00:32:35.165 "supported_io_types": { 
00:32:35.165 "read": true, 00:32:35.165 "write": true, 00:32:35.165 "unmap": true, 00:32:35.165 "flush": true, 00:32:35.165 "reset": true, 00:32:35.165 "nvme_admin": false, 00:32:35.165 "nvme_io": false, 00:32:35.165 "nvme_io_md": false, 00:32:35.165 "write_zeroes": true, 00:32:35.165 "zcopy": true, 00:32:35.165 "get_zone_info": false, 00:32:35.165 "zone_management": false, 00:32:35.165 "zone_append": false, 00:32:35.165 "compare": false, 00:32:35.165 "compare_and_write": false, 00:32:35.165 "abort": true, 00:32:35.165 "seek_hole": false, 00:32:35.165 "seek_data": false, 00:32:35.165 "copy": true, 00:32:35.165 "nvme_iov_md": false 00:32:35.165 }, 00:32:35.165 "memory_domains": [ 00:32:35.165 { 00:32:35.165 "dma_device_id": "system", 00:32:35.165 "dma_device_type": 1 00:32:35.165 }, 00:32:35.165 { 00:32:35.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.165 "dma_device_type": 2 00:32:35.165 } 00:32:35.165 ], 00:32:35.165 "driver_specific": {} 00:32:35.165 } 00:32:35.165 ] 00:32:35.165 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.437 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.438 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.438 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.438 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:35.438 "name": "Existed_Raid", 00:32:35.438 "uuid": "dca22561-f657-45a6-8c6b-29a0e69453f2", 00:32:35.438 "strip_size_kb": 64, 00:32:35.438 "state": "configuring", 00:32:35.438 "raid_level": "concat", 00:32:35.438 "superblock": true, 00:32:35.438 "num_base_bdevs": 4, 00:32:35.438 "num_base_bdevs_discovered": 3, 00:32:35.438 "num_base_bdevs_operational": 4, 00:32:35.438 "base_bdevs_list": [ 00:32:35.438 { 00:32:35.438 "name": "BaseBdev1", 00:32:35.438 "uuid": "2dbda38d-02c8-4650-85c8-9f961e866afc", 00:32:35.438 "is_configured": true, 00:32:35.438 "data_offset": 2048, 00:32:35.438 "data_size": 63488 00:32:35.438 }, 00:32:35.438 { 00:32:35.438 "name": "BaseBdev2", 00:32:35.438 
"uuid": "2ec4279e-11c5-4b2c-a764-a49049f981f6", 00:32:35.438 "is_configured": true, 00:32:35.438 "data_offset": 2048, 00:32:35.438 "data_size": 63488 00:32:35.438 }, 00:32:35.438 { 00:32:35.438 "name": "BaseBdev3", 00:32:35.438 "uuid": "3733cf9e-ab0a-4a46-a8d9-2cd30b609868", 00:32:35.438 "is_configured": true, 00:32:35.438 "data_offset": 2048, 00:32:35.438 "data_size": 63488 00:32:35.438 }, 00:32:35.438 { 00:32:35.438 "name": "BaseBdev4", 00:32:35.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.438 "is_configured": false, 00:32:35.438 "data_offset": 0, 00:32:35.438 "data_size": 0 00:32:35.438 } 00:32:35.438 ] 00:32:35.438 }' 00:32:35.438 17:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:35.438 17:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.700 [2024-11-26 17:29:13.099431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:35.700 [2024-11-26 17:29:13.099742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:35.700 [2024-11-26 17:29:13.099761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:35.700 [2024-11-26 17:29:13.100077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:35.700 BaseBdev4 00:32:35.700 [2024-11-26 17:29:13.100229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:35.700 [2024-11-26 17:29:13.100244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:32:35.700 [2024-11-26 17:29:13.100396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.700 [ 00:32:35.700 { 00:32:35.700 "name": "BaseBdev4", 00:32:35.700 "aliases": [ 00:32:35.700 "d46be144-dd0a-4f76-97cd-63cb182b8677" 00:32:35.700 ], 00:32:35.700 "product_name": "Malloc disk", 00:32:35.700 "block_size": 512, 00:32:35.700 
"num_blocks": 65536, 00:32:35.700 "uuid": "d46be144-dd0a-4f76-97cd-63cb182b8677", 00:32:35.700 "assigned_rate_limits": { 00:32:35.700 "rw_ios_per_sec": 0, 00:32:35.700 "rw_mbytes_per_sec": 0, 00:32:35.700 "r_mbytes_per_sec": 0, 00:32:35.700 "w_mbytes_per_sec": 0 00:32:35.700 }, 00:32:35.700 "claimed": true, 00:32:35.700 "claim_type": "exclusive_write", 00:32:35.700 "zoned": false, 00:32:35.700 "supported_io_types": { 00:32:35.700 "read": true, 00:32:35.700 "write": true, 00:32:35.700 "unmap": true, 00:32:35.700 "flush": true, 00:32:35.700 "reset": true, 00:32:35.700 "nvme_admin": false, 00:32:35.700 "nvme_io": false, 00:32:35.700 "nvme_io_md": false, 00:32:35.700 "write_zeroes": true, 00:32:35.700 "zcopy": true, 00:32:35.700 "get_zone_info": false, 00:32:35.700 "zone_management": false, 00:32:35.700 "zone_append": false, 00:32:35.700 "compare": false, 00:32:35.700 "compare_and_write": false, 00:32:35.700 "abort": true, 00:32:35.700 "seek_hole": false, 00:32:35.700 "seek_data": false, 00:32:35.700 "copy": true, 00:32:35.700 "nvme_iov_md": false 00:32:35.700 }, 00:32:35.700 "memory_domains": [ 00:32:35.700 { 00:32:35.700 "dma_device_id": "system", 00:32:35.700 "dma_device_type": 1 00:32:35.700 }, 00:32:35.700 { 00:32:35.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.700 "dma_device_type": 2 00:32:35.700 } 00:32:35.700 ], 00:32:35.700 "driver_specific": {} 00:32:35.700 } 00:32:35.700 ] 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:35.700 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:35.960 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:35.960 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.960 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.960 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.960 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:35.960 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.960 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:35.960 "name": "Existed_Raid", 00:32:35.960 "uuid": "dca22561-f657-45a6-8c6b-29a0e69453f2", 00:32:35.960 "strip_size_kb": 64, 00:32:35.960 "state": "online", 00:32:35.960 "raid_level": "concat", 00:32:35.960 "superblock": true, 00:32:35.960 "num_base_bdevs": 4, 
00:32:35.960 "num_base_bdevs_discovered": 4, 00:32:35.960 "num_base_bdevs_operational": 4, 00:32:35.960 "base_bdevs_list": [ 00:32:35.960 { 00:32:35.960 "name": "BaseBdev1", 00:32:35.960 "uuid": "2dbda38d-02c8-4650-85c8-9f961e866afc", 00:32:35.960 "is_configured": true, 00:32:35.960 "data_offset": 2048, 00:32:35.960 "data_size": 63488 00:32:35.960 }, 00:32:35.960 { 00:32:35.960 "name": "BaseBdev2", 00:32:35.960 "uuid": "2ec4279e-11c5-4b2c-a764-a49049f981f6", 00:32:35.960 "is_configured": true, 00:32:35.960 "data_offset": 2048, 00:32:35.960 "data_size": 63488 00:32:35.960 }, 00:32:35.960 { 00:32:35.960 "name": "BaseBdev3", 00:32:35.960 "uuid": "3733cf9e-ab0a-4a46-a8d9-2cd30b609868", 00:32:35.960 "is_configured": true, 00:32:35.960 "data_offset": 2048, 00:32:35.960 "data_size": 63488 00:32:35.960 }, 00:32:35.960 { 00:32:35.960 "name": "BaseBdev4", 00:32:35.960 "uuid": "d46be144-dd0a-4f76-97cd-63cb182b8677", 00:32:35.960 "is_configured": true, 00:32:35.960 "data_offset": 2048, 00:32:35.960 "data_size": 63488 00:32:35.960 } 00:32:35.960 ] 00:32:35.960 }' 00:32:35.960 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:35.960 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:36.220 
17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.220 [2024-11-26 17:29:13.612006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:36.220 "name": "Existed_Raid", 00:32:36.220 "aliases": [ 00:32:36.220 "dca22561-f657-45a6-8c6b-29a0e69453f2" 00:32:36.220 ], 00:32:36.220 "product_name": "Raid Volume", 00:32:36.220 "block_size": 512, 00:32:36.220 "num_blocks": 253952, 00:32:36.220 "uuid": "dca22561-f657-45a6-8c6b-29a0e69453f2", 00:32:36.220 "assigned_rate_limits": { 00:32:36.220 "rw_ios_per_sec": 0, 00:32:36.220 "rw_mbytes_per_sec": 0, 00:32:36.220 "r_mbytes_per_sec": 0, 00:32:36.220 "w_mbytes_per_sec": 0 00:32:36.220 }, 00:32:36.220 "claimed": false, 00:32:36.220 "zoned": false, 00:32:36.220 "supported_io_types": { 00:32:36.220 "read": true, 00:32:36.220 "write": true, 00:32:36.220 "unmap": true, 00:32:36.220 "flush": true, 00:32:36.220 "reset": true, 00:32:36.220 "nvme_admin": false, 00:32:36.220 "nvme_io": false, 00:32:36.220 "nvme_io_md": false, 00:32:36.220 "write_zeroes": true, 00:32:36.220 "zcopy": false, 00:32:36.220 "get_zone_info": false, 00:32:36.220 "zone_management": false, 00:32:36.220 "zone_append": false, 00:32:36.220 "compare": false, 00:32:36.220 "compare_and_write": false, 00:32:36.220 "abort": false, 00:32:36.220 "seek_hole": false, 00:32:36.220 "seek_data": false, 00:32:36.220 "copy": false, 00:32:36.220 
"nvme_iov_md": false 00:32:36.220 }, 00:32:36.220 "memory_domains": [ 00:32:36.220 { 00:32:36.220 "dma_device_id": "system", 00:32:36.220 "dma_device_type": 1 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:36.220 "dma_device_type": 2 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "dma_device_id": "system", 00:32:36.220 "dma_device_type": 1 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:36.220 "dma_device_type": 2 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "dma_device_id": "system", 00:32:36.220 "dma_device_type": 1 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:36.220 "dma_device_type": 2 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "dma_device_id": "system", 00:32:36.220 "dma_device_type": 1 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:36.220 "dma_device_type": 2 00:32:36.220 } 00:32:36.220 ], 00:32:36.220 "driver_specific": { 00:32:36.220 "raid": { 00:32:36.220 "uuid": "dca22561-f657-45a6-8c6b-29a0e69453f2", 00:32:36.220 "strip_size_kb": 64, 00:32:36.220 "state": "online", 00:32:36.220 "raid_level": "concat", 00:32:36.220 "superblock": true, 00:32:36.220 "num_base_bdevs": 4, 00:32:36.220 "num_base_bdevs_discovered": 4, 00:32:36.220 "num_base_bdevs_operational": 4, 00:32:36.220 "base_bdevs_list": [ 00:32:36.220 { 00:32:36.220 "name": "BaseBdev1", 00:32:36.220 "uuid": "2dbda38d-02c8-4650-85c8-9f961e866afc", 00:32:36.220 "is_configured": true, 00:32:36.220 "data_offset": 2048, 00:32:36.220 "data_size": 63488 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "name": "BaseBdev2", 00:32:36.220 "uuid": "2ec4279e-11c5-4b2c-a764-a49049f981f6", 00:32:36.220 "is_configured": true, 00:32:36.220 "data_offset": 2048, 00:32:36.220 "data_size": 63488 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "name": "BaseBdev3", 00:32:36.220 "uuid": "3733cf9e-ab0a-4a46-a8d9-2cd30b609868", 00:32:36.220 "is_configured": true, 
00:32:36.220 "data_offset": 2048, 00:32:36.220 "data_size": 63488 00:32:36.220 }, 00:32:36.220 { 00:32:36.220 "name": "BaseBdev4", 00:32:36.220 "uuid": "d46be144-dd0a-4f76-97cd-63cb182b8677", 00:32:36.220 "is_configured": true, 00:32:36.220 "data_offset": 2048, 00:32:36.220 "data_size": 63488 00:32:36.220 } 00:32:36.220 ] 00:32:36.220 } 00:32:36.220 } 00:32:36.220 }' 00:32:36.220 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:36.480 BaseBdev2 00:32:36.480 BaseBdev3 00:32:36.480 BaseBdev4' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:36.480 17:29:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.480 17:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.480 [2024-11-26 17:29:13.915787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:36.480 [2024-11-26 17:29:13.915823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:36.480 [2024-11-26 17:29:13.915878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:36.740 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:36.740 "name": "Existed_Raid", 00:32:36.740 "uuid": "dca22561-f657-45a6-8c6b-29a0e69453f2", 00:32:36.741 "strip_size_kb": 64, 00:32:36.741 "state": "offline", 00:32:36.741 "raid_level": "concat", 00:32:36.741 "superblock": true, 00:32:36.741 "num_base_bdevs": 4, 00:32:36.741 "num_base_bdevs_discovered": 3, 00:32:36.741 "num_base_bdevs_operational": 3, 00:32:36.741 "base_bdevs_list": [ 00:32:36.741 { 00:32:36.741 "name": null, 00:32:36.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.741 "is_configured": false, 00:32:36.741 "data_offset": 0, 00:32:36.741 "data_size": 63488 00:32:36.741 }, 00:32:36.741 { 00:32:36.741 "name": "BaseBdev2", 00:32:36.741 "uuid": "2ec4279e-11c5-4b2c-a764-a49049f981f6", 00:32:36.741 "is_configured": true, 00:32:36.741 "data_offset": 2048, 00:32:36.741 "data_size": 63488 00:32:36.741 }, 00:32:36.741 { 00:32:36.741 "name": "BaseBdev3", 00:32:36.741 "uuid": "3733cf9e-ab0a-4a46-a8d9-2cd30b609868", 00:32:36.741 "is_configured": true, 00:32:36.741 "data_offset": 2048, 00:32:36.741 "data_size": 63488 00:32:36.741 }, 00:32:36.741 { 00:32:36.741 "name": "BaseBdev4", 00:32:36.741 "uuid": "d46be144-dd0a-4f76-97cd-63cb182b8677", 00:32:36.741 "is_configured": true, 00:32:36.741 "data_offset": 2048, 00:32:36.741 "data_size": 63488 00:32:36.741 } 00:32:36.741 ] 00:32:36.741 }' 00:32:36.741 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:36.741 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.308 
17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.308 [2024-11-26 17:29:14.571596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:32:37.308 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:37.309 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:37.309 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:37.309 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.309 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.309 [2024-11-26 17:29:14.738296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:37.567 17:29:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.567 17:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.567 [2024-11-26 17:29:14.901285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:37.567 [2024-11-26 17:29:14.901339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.826 BaseBdev2 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.826 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.826 [ 00:32:37.826 { 00:32:37.826 "name": "BaseBdev2", 00:32:37.826 "aliases": [ 00:32:37.826 
"7f20f31e-2802-49ff-8788-c902f26ee874" 00:32:37.826 ], 00:32:37.826 "product_name": "Malloc disk", 00:32:37.827 "block_size": 512, 00:32:37.827 "num_blocks": 65536, 00:32:37.827 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:37.827 "assigned_rate_limits": { 00:32:37.827 "rw_ios_per_sec": 0, 00:32:37.827 "rw_mbytes_per_sec": 0, 00:32:37.827 "r_mbytes_per_sec": 0, 00:32:37.827 "w_mbytes_per_sec": 0 00:32:37.827 }, 00:32:37.827 "claimed": false, 00:32:37.827 "zoned": false, 00:32:37.827 "supported_io_types": { 00:32:37.827 "read": true, 00:32:37.827 "write": true, 00:32:37.827 "unmap": true, 00:32:37.827 "flush": true, 00:32:37.827 "reset": true, 00:32:37.827 "nvme_admin": false, 00:32:37.827 "nvme_io": false, 00:32:37.827 "nvme_io_md": false, 00:32:37.827 "write_zeroes": true, 00:32:37.827 "zcopy": true, 00:32:37.827 "get_zone_info": false, 00:32:37.827 "zone_management": false, 00:32:37.827 "zone_append": false, 00:32:37.827 "compare": false, 00:32:37.827 "compare_and_write": false, 00:32:37.827 "abort": true, 00:32:37.827 "seek_hole": false, 00:32:37.827 "seek_data": false, 00:32:37.827 "copy": true, 00:32:37.827 "nvme_iov_md": false 00:32:37.827 }, 00:32:37.827 "memory_domains": [ 00:32:37.827 { 00:32:37.827 "dma_device_id": "system", 00:32:37.827 "dma_device_type": 1 00:32:37.827 }, 00:32:37.827 { 00:32:37.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.827 "dma_device_type": 2 00:32:37.827 } 00:32:37.827 ], 00:32:37.827 "driver_specific": {} 00:32:37.827 } 00:32:37.827 ] 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:37.827 17:29:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.827 BaseBdev3 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.827 [ 00:32:37.827 { 
00:32:37.827 "name": "BaseBdev3", 00:32:37.827 "aliases": [ 00:32:37.827 "dc4b485d-3a27-459f-bde4-1e5df6152436" 00:32:37.827 ], 00:32:37.827 "product_name": "Malloc disk", 00:32:37.827 "block_size": 512, 00:32:37.827 "num_blocks": 65536, 00:32:37.827 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:37.827 "assigned_rate_limits": { 00:32:37.827 "rw_ios_per_sec": 0, 00:32:37.827 "rw_mbytes_per_sec": 0, 00:32:37.827 "r_mbytes_per_sec": 0, 00:32:37.827 "w_mbytes_per_sec": 0 00:32:37.827 }, 00:32:37.827 "claimed": false, 00:32:37.827 "zoned": false, 00:32:37.827 "supported_io_types": { 00:32:37.827 "read": true, 00:32:37.827 "write": true, 00:32:37.827 "unmap": true, 00:32:37.827 "flush": true, 00:32:37.827 "reset": true, 00:32:37.827 "nvme_admin": false, 00:32:37.827 "nvme_io": false, 00:32:37.827 "nvme_io_md": false, 00:32:37.827 "write_zeroes": true, 00:32:37.827 "zcopy": true, 00:32:37.827 "get_zone_info": false, 00:32:37.827 "zone_management": false, 00:32:37.827 "zone_append": false, 00:32:37.827 "compare": false, 00:32:37.827 "compare_and_write": false, 00:32:37.827 "abort": true, 00:32:37.827 "seek_hole": false, 00:32:37.827 "seek_data": false, 00:32:37.827 "copy": true, 00:32:37.827 "nvme_iov_md": false 00:32:37.827 }, 00:32:37.827 "memory_domains": [ 00:32:37.827 { 00:32:37.827 "dma_device_id": "system", 00:32:37.827 "dma_device_type": 1 00:32:37.827 }, 00:32:37.827 { 00:32:37.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.827 "dma_device_type": 2 00:32:37.827 } 00:32:37.827 ], 00:32:37.827 "driver_specific": {} 00:32:37.827 } 00:32:37.827 ] 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.827 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.086 BaseBdev4 00:32:38.086 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.086 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:32:38.086 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:32:38.086 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:38.086 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:38.086 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:38.086 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:38.086 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:32:38.087 [ 00:32:38.087 { 00:32:38.087 "name": "BaseBdev4", 00:32:38.087 "aliases": [ 00:32:38.087 "75115ce6-77a6-41b0-a27e-3fd94f37c74b" 00:32:38.087 ], 00:32:38.087 "product_name": "Malloc disk", 00:32:38.087 "block_size": 512, 00:32:38.087 "num_blocks": 65536, 00:32:38.087 "uuid": "75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:38.087 "assigned_rate_limits": { 00:32:38.087 "rw_ios_per_sec": 0, 00:32:38.087 "rw_mbytes_per_sec": 0, 00:32:38.087 "r_mbytes_per_sec": 0, 00:32:38.087 "w_mbytes_per_sec": 0 00:32:38.087 }, 00:32:38.087 "claimed": false, 00:32:38.087 "zoned": false, 00:32:38.087 "supported_io_types": { 00:32:38.087 "read": true, 00:32:38.087 "write": true, 00:32:38.087 "unmap": true, 00:32:38.087 "flush": true, 00:32:38.087 "reset": true, 00:32:38.087 "nvme_admin": false, 00:32:38.087 "nvme_io": false, 00:32:38.087 "nvme_io_md": false, 00:32:38.087 "write_zeroes": true, 00:32:38.087 "zcopy": true, 00:32:38.087 "get_zone_info": false, 00:32:38.087 "zone_management": false, 00:32:38.087 "zone_append": false, 00:32:38.087 "compare": false, 00:32:38.087 "compare_and_write": false, 00:32:38.087 "abort": true, 00:32:38.087 "seek_hole": false, 00:32:38.087 "seek_data": false, 00:32:38.087 "copy": true, 00:32:38.087 "nvme_iov_md": false 00:32:38.087 }, 00:32:38.087 "memory_domains": [ 00:32:38.087 { 00:32:38.087 "dma_device_id": "system", 00:32:38.087 "dma_device_type": 1 00:32:38.087 }, 00:32:38.087 { 00:32:38.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:38.087 "dma_device_type": 2 00:32:38.087 } 00:32:38.087 ], 00:32:38.087 "driver_specific": {} 00:32:38.087 } 00:32:38.087 ] 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:38.087 17:29:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.087 [2024-11-26 17:29:15.332603] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:38.087 [2024-11-26 17:29:15.332658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:38.087 [2024-11-26 17:29:15.332687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:38.087 [2024-11-26 17:29:15.335203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:38.087 [2024-11-26 17:29:15.335435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:38.087 "name": "Existed_Raid", 00:32:38.087 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:38.087 "strip_size_kb": 64, 00:32:38.087 "state": "configuring", 00:32:38.087 "raid_level": "concat", 00:32:38.087 "superblock": true, 00:32:38.087 "num_base_bdevs": 4, 00:32:38.087 "num_base_bdevs_discovered": 3, 00:32:38.087 "num_base_bdevs_operational": 4, 00:32:38.087 "base_bdevs_list": [ 00:32:38.087 { 00:32:38.087 "name": "BaseBdev1", 00:32:38.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.087 "is_configured": false, 00:32:38.087 "data_offset": 0, 00:32:38.087 "data_size": 0 00:32:38.087 }, 00:32:38.087 { 00:32:38.087 "name": "BaseBdev2", 00:32:38.087 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:38.087 "is_configured": true, 00:32:38.087 "data_offset": 2048, 00:32:38.087 "data_size": 63488 
00:32:38.087 }, 00:32:38.087 { 00:32:38.087 "name": "BaseBdev3", 00:32:38.087 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:38.087 "is_configured": true, 00:32:38.087 "data_offset": 2048, 00:32:38.087 "data_size": 63488 00:32:38.087 }, 00:32:38.087 { 00:32:38.087 "name": "BaseBdev4", 00:32:38.087 "uuid": "75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:38.087 "is_configured": true, 00:32:38.087 "data_offset": 2048, 00:32:38.087 "data_size": 63488 00:32:38.087 } 00:32:38.087 ] 00:32:38.087 }' 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:38.087 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.655 [2024-11-26 17:29:15.808725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:38.655 "name": "Existed_Raid", 00:32:38.655 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:38.655 "strip_size_kb": 64, 00:32:38.655 "state": "configuring", 00:32:38.655 "raid_level": "concat", 00:32:38.655 "superblock": true, 00:32:38.655 "num_base_bdevs": 4, 00:32:38.655 "num_base_bdevs_discovered": 2, 00:32:38.655 "num_base_bdevs_operational": 4, 00:32:38.655 "base_bdevs_list": [ 00:32:38.655 { 00:32:38.655 "name": "BaseBdev1", 00:32:38.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.655 "is_configured": false, 00:32:38.655 "data_offset": 0, 00:32:38.655 "data_size": 0 00:32:38.655 }, 00:32:38.655 { 00:32:38.655 "name": null, 00:32:38.655 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:38.655 "is_configured": false, 00:32:38.655 "data_offset": 0, 00:32:38.655 "data_size": 63488 
00:32:38.655 }, 00:32:38.655 { 00:32:38.655 "name": "BaseBdev3", 00:32:38.655 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:38.655 "is_configured": true, 00:32:38.655 "data_offset": 2048, 00:32:38.655 "data_size": 63488 00:32:38.655 }, 00:32:38.655 { 00:32:38.655 "name": "BaseBdev4", 00:32:38.655 "uuid": "75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:38.655 "is_configured": true, 00:32:38.655 "data_offset": 2048, 00:32:38.655 "data_size": 63488 00:32:38.655 } 00:32:38.655 ] 00:32:38.655 }' 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:38.655 17:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.915 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.915 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.915 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.915 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:38.915 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.915 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:32:38.915 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:38.915 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.915 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.174 [2024-11-26 17:29:16.379014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:39.174 BaseBdev1 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.174 [ 00:32:39.174 { 00:32:39.174 "name": "BaseBdev1", 00:32:39.174 "aliases": [ 00:32:39.174 "16c4deb5-a085-4ada-a81f-d6bf6d160804" 00:32:39.174 ], 00:32:39.174 "product_name": "Malloc disk", 00:32:39.174 "block_size": 512, 00:32:39.174 "num_blocks": 65536, 00:32:39.174 "uuid": "16c4deb5-a085-4ada-a81f-d6bf6d160804", 00:32:39.174 "assigned_rate_limits": { 00:32:39.174 "rw_ios_per_sec": 0, 00:32:39.174 "rw_mbytes_per_sec": 0, 
00:32:39.174 "r_mbytes_per_sec": 0, 00:32:39.174 "w_mbytes_per_sec": 0 00:32:39.174 }, 00:32:39.174 "claimed": true, 00:32:39.174 "claim_type": "exclusive_write", 00:32:39.174 "zoned": false, 00:32:39.174 "supported_io_types": { 00:32:39.174 "read": true, 00:32:39.174 "write": true, 00:32:39.174 "unmap": true, 00:32:39.174 "flush": true, 00:32:39.174 "reset": true, 00:32:39.174 "nvme_admin": false, 00:32:39.174 "nvme_io": false, 00:32:39.174 "nvme_io_md": false, 00:32:39.174 "write_zeroes": true, 00:32:39.174 "zcopy": true, 00:32:39.174 "get_zone_info": false, 00:32:39.174 "zone_management": false, 00:32:39.174 "zone_append": false, 00:32:39.174 "compare": false, 00:32:39.174 "compare_and_write": false, 00:32:39.174 "abort": true, 00:32:39.174 "seek_hole": false, 00:32:39.174 "seek_data": false, 00:32:39.174 "copy": true, 00:32:39.174 "nvme_iov_md": false 00:32:39.174 }, 00:32:39.174 "memory_domains": [ 00:32:39.174 { 00:32:39.174 "dma_device_id": "system", 00:32:39.174 "dma_device_type": 1 00:32:39.174 }, 00:32:39.174 { 00:32:39.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.174 "dma_device_type": 2 00:32:39.174 } 00:32:39.174 ], 00:32:39.174 "driver_specific": {} 00:32:39.174 } 00:32:39.174 ] 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:39.174 17:29:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.174 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.175 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.175 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.175 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.175 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.175 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.175 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.175 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.175 "name": "Existed_Raid", 00:32:39.175 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:39.175 "strip_size_kb": 64, 00:32:39.175 "state": "configuring", 00:32:39.175 "raid_level": "concat", 00:32:39.175 "superblock": true, 00:32:39.175 "num_base_bdevs": 4, 00:32:39.175 "num_base_bdevs_discovered": 3, 00:32:39.175 "num_base_bdevs_operational": 4, 00:32:39.175 "base_bdevs_list": [ 00:32:39.175 { 00:32:39.175 "name": "BaseBdev1", 00:32:39.175 "uuid": "16c4deb5-a085-4ada-a81f-d6bf6d160804", 00:32:39.175 "is_configured": true, 00:32:39.175 "data_offset": 2048, 00:32:39.175 "data_size": 63488 00:32:39.175 }, 00:32:39.175 { 
00:32:39.175 "name": null, 00:32:39.175 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:39.175 "is_configured": false, 00:32:39.175 "data_offset": 0, 00:32:39.175 "data_size": 63488 00:32:39.175 }, 00:32:39.175 { 00:32:39.175 "name": "BaseBdev3", 00:32:39.175 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:39.175 "is_configured": true, 00:32:39.175 "data_offset": 2048, 00:32:39.175 "data_size": 63488 00:32:39.175 }, 00:32:39.175 { 00:32:39.175 "name": "BaseBdev4", 00:32:39.175 "uuid": "75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:39.175 "is_configured": true, 00:32:39.175 "data_offset": 2048, 00:32:39.175 "data_size": 63488 00:32:39.175 } 00:32:39.175 ] 00:32:39.175 }' 00:32:39.175 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.175 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.433 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.433 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.433 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.434 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:39.434 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.692 [2024-11-26 17:29:16.887320] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.692 17:29:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.692 "name": "Existed_Raid", 00:32:39.692 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:39.692 "strip_size_kb": 64, 00:32:39.692 "state": "configuring", 00:32:39.692 "raid_level": "concat", 00:32:39.692 "superblock": true, 00:32:39.692 "num_base_bdevs": 4, 00:32:39.692 "num_base_bdevs_discovered": 2, 00:32:39.692 "num_base_bdevs_operational": 4, 00:32:39.692 "base_bdevs_list": [ 00:32:39.692 { 00:32:39.692 "name": "BaseBdev1", 00:32:39.692 "uuid": "16c4deb5-a085-4ada-a81f-d6bf6d160804", 00:32:39.692 "is_configured": true, 00:32:39.692 "data_offset": 2048, 00:32:39.692 "data_size": 63488 00:32:39.692 }, 00:32:39.692 { 00:32:39.692 "name": null, 00:32:39.692 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:39.692 "is_configured": false, 00:32:39.692 "data_offset": 0, 00:32:39.692 "data_size": 63488 00:32:39.692 }, 00:32:39.692 { 00:32:39.692 "name": null, 00:32:39.692 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:39.692 "is_configured": false, 00:32:39.692 "data_offset": 0, 00:32:39.692 "data_size": 63488 00:32:39.692 }, 00:32:39.692 { 00:32:39.692 "name": "BaseBdev4", 00:32:39.692 "uuid": "75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:39.692 "is_configured": true, 00:32:39.692 "data_offset": 2048, 00:32:39.692 "data_size": 63488 00:32:39.692 } 00:32:39.692 ] 00:32:39.692 }' 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.692 17:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.950 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:39.950 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.950 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.950 
17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.950 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.950 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:39.950 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:39.950 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.950 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.209 [2024-11-26 17:29:17.399434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:40.209 "name": "Existed_Raid", 00:32:40.209 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:40.209 "strip_size_kb": 64, 00:32:40.209 "state": "configuring", 00:32:40.209 "raid_level": "concat", 00:32:40.209 "superblock": true, 00:32:40.209 "num_base_bdevs": 4, 00:32:40.209 "num_base_bdevs_discovered": 3, 00:32:40.209 "num_base_bdevs_operational": 4, 00:32:40.209 "base_bdevs_list": [ 00:32:40.209 { 00:32:40.209 "name": "BaseBdev1", 00:32:40.209 "uuid": "16c4deb5-a085-4ada-a81f-d6bf6d160804", 00:32:40.209 "is_configured": true, 00:32:40.209 "data_offset": 2048, 00:32:40.209 "data_size": 63488 00:32:40.209 }, 00:32:40.209 { 00:32:40.209 "name": null, 00:32:40.209 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:40.209 "is_configured": false, 00:32:40.209 "data_offset": 0, 00:32:40.209 "data_size": 63488 00:32:40.209 }, 00:32:40.209 { 00:32:40.209 "name": "BaseBdev3", 00:32:40.209 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:40.209 "is_configured": true, 00:32:40.209 "data_offset": 2048, 00:32:40.209 "data_size": 63488 00:32:40.209 }, 00:32:40.209 { 00:32:40.209 "name": "BaseBdev4", 00:32:40.209 "uuid": 
"75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:40.209 "is_configured": true, 00:32:40.209 "data_offset": 2048, 00:32:40.209 "data_size": 63488 00:32:40.209 } 00:32:40.209 ] 00:32:40.209 }' 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:40.209 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.467 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.467 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.467 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:40.467 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.467 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.467 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:40.467 17:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:40.467 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.467 17:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.467 [2024-11-26 17:29:17.911674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.726 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:40.726 "name": "Existed_Raid", 00:32:40.726 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:40.726 "strip_size_kb": 64, 00:32:40.726 "state": "configuring", 00:32:40.726 "raid_level": "concat", 00:32:40.726 "superblock": true, 00:32:40.726 "num_base_bdevs": 4, 00:32:40.726 "num_base_bdevs_discovered": 2, 00:32:40.726 "num_base_bdevs_operational": 4, 00:32:40.726 "base_bdevs_list": [ 00:32:40.726 { 00:32:40.726 "name": null, 00:32:40.726 
"uuid": "16c4deb5-a085-4ada-a81f-d6bf6d160804", 00:32:40.726 "is_configured": false, 00:32:40.726 "data_offset": 0, 00:32:40.726 "data_size": 63488 00:32:40.726 }, 00:32:40.726 { 00:32:40.726 "name": null, 00:32:40.726 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:40.726 "is_configured": false, 00:32:40.726 "data_offset": 0, 00:32:40.726 "data_size": 63488 00:32:40.726 }, 00:32:40.726 { 00:32:40.726 "name": "BaseBdev3", 00:32:40.726 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:40.726 "is_configured": true, 00:32:40.726 "data_offset": 2048, 00:32:40.726 "data_size": 63488 00:32:40.726 }, 00:32:40.726 { 00:32:40.726 "name": "BaseBdev4", 00:32:40.727 "uuid": "75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:40.727 "is_configured": true, 00:32:40.727 "data_offset": 2048, 00:32:40.727 "data_size": 63488 00:32:40.727 } 00:32:40.727 ] 00:32:40.727 }' 00:32:40.727 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:40.727 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.294 [2024-11-26 17:29:18.539218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.294 17:29:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:41.294 "name": "Existed_Raid", 00:32:41.294 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:41.294 "strip_size_kb": 64, 00:32:41.294 "state": "configuring", 00:32:41.294 "raid_level": "concat", 00:32:41.294 "superblock": true, 00:32:41.294 "num_base_bdevs": 4, 00:32:41.294 "num_base_bdevs_discovered": 3, 00:32:41.294 "num_base_bdevs_operational": 4, 00:32:41.294 "base_bdevs_list": [ 00:32:41.294 { 00:32:41.294 "name": null, 00:32:41.294 "uuid": "16c4deb5-a085-4ada-a81f-d6bf6d160804", 00:32:41.294 "is_configured": false, 00:32:41.294 "data_offset": 0, 00:32:41.294 "data_size": 63488 00:32:41.294 }, 00:32:41.294 { 00:32:41.294 "name": "BaseBdev2", 00:32:41.294 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:41.294 "is_configured": true, 00:32:41.294 "data_offset": 2048, 00:32:41.294 "data_size": 63488 00:32:41.294 }, 00:32:41.294 { 00:32:41.294 "name": "BaseBdev3", 00:32:41.294 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:41.294 "is_configured": true, 00:32:41.294 "data_offset": 2048, 00:32:41.294 "data_size": 63488 00:32:41.294 }, 00:32:41.294 { 00:32:41.294 "name": "BaseBdev4", 00:32:41.294 "uuid": "75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:41.294 "is_configured": true, 00:32:41.294 "data_offset": 2048, 00:32:41.294 "data_size": 63488 00:32:41.294 } 00:32:41.294 ] 00:32:41.294 }' 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:41.294 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.553 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.553 17:29:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.553 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.553 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:41.553 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.553 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:41.553 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.553 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.553 17:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.553 17:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 16c4deb5-a085-4ada-a81f-d6bf6d160804 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.812 [2024-11-26 17:29:19.079993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:41.812 [2024-11-26 17:29:19.080308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:41.812 [2024-11-26 17:29:19.080324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:41.812 [2024-11-26 17:29:19.080651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:32:41.812 [2024-11-26 17:29:19.080794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:41.812 [2024-11-26 17:29:19.080809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:32:41.812 NewBaseBdev 00:32:41.812 [2024-11-26 17:29:19.080945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.812 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.813 17:29:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.813 [ 00:32:41.813 { 00:32:41.813 "name": "NewBaseBdev", 00:32:41.813 "aliases": [ 00:32:41.813 "16c4deb5-a085-4ada-a81f-d6bf6d160804" 00:32:41.813 ], 00:32:41.813 "product_name": "Malloc disk", 00:32:41.813 "block_size": 512, 00:32:41.813 "num_blocks": 65536, 00:32:41.813 "uuid": "16c4deb5-a085-4ada-a81f-d6bf6d160804", 00:32:41.813 "assigned_rate_limits": { 00:32:41.813 "rw_ios_per_sec": 0, 00:32:41.813 "rw_mbytes_per_sec": 0, 00:32:41.813 "r_mbytes_per_sec": 0, 00:32:41.813 "w_mbytes_per_sec": 0 00:32:41.813 }, 00:32:41.813 "claimed": true, 00:32:41.813 "claim_type": "exclusive_write", 00:32:41.813 "zoned": false, 00:32:41.813 "supported_io_types": { 00:32:41.813 "read": true, 00:32:41.813 "write": true, 00:32:41.813 "unmap": true, 00:32:41.813 "flush": true, 00:32:41.813 "reset": true, 00:32:41.813 "nvme_admin": false, 00:32:41.813 "nvme_io": false, 00:32:41.813 "nvme_io_md": false, 00:32:41.813 "write_zeroes": true, 00:32:41.813 "zcopy": true, 00:32:41.813 "get_zone_info": false, 00:32:41.813 "zone_management": false, 00:32:41.813 "zone_append": false, 00:32:41.813 "compare": false, 00:32:41.813 "compare_and_write": false, 00:32:41.813 "abort": true, 00:32:41.813 "seek_hole": false, 00:32:41.813 "seek_data": false, 00:32:41.813 "copy": true, 00:32:41.813 "nvme_iov_md": false 00:32:41.813 }, 00:32:41.813 "memory_domains": [ 00:32:41.813 { 00:32:41.813 "dma_device_id": "system", 00:32:41.813 "dma_device_type": 1 00:32:41.813 }, 00:32:41.813 { 00:32:41.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.813 "dma_device_type": 2 00:32:41.813 } 00:32:41.813 ], 00:32:41.813 "driver_specific": {} 00:32:41.813 } 00:32:41.813 ] 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:41.813 17:29:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:41.813 "name": "Existed_Raid", 00:32:41.813 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:41.813 "strip_size_kb": 64, 00:32:41.813 
"state": "online", 00:32:41.813 "raid_level": "concat", 00:32:41.813 "superblock": true, 00:32:41.813 "num_base_bdevs": 4, 00:32:41.813 "num_base_bdevs_discovered": 4, 00:32:41.813 "num_base_bdevs_operational": 4, 00:32:41.813 "base_bdevs_list": [ 00:32:41.813 { 00:32:41.813 "name": "NewBaseBdev", 00:32:41.813 "uuid": "16c4deb5-a085-4ada-a81f-d6bf6d160804", 00:32:41.813 "is_configured": true, 00:32:41.813 "data_offset": 2048, 00:32:41.813 "data_size": 63488 00:32:41.813 }, 00:32:41.813 { 00:32:41.813 "name": "BaseBdev2", 00:32:41.813 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:41.813 "is_configured": true, 00:32:41.813 "data_offset": 2048, 00:32:41.813 "data_size": 63488 00:32:41.813 }, 00:32:41.813 { 00:32:41.813 "name": "BaseBdev3", 00:32:41.813 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:41.813 "is_configured": true, 00:32:41.813 "data_offset": 2048, 00:32:41.813 "data_size": 63488 00:32:41.813 }, 00:32:41.813 { 00:32:41.813 "name": "BaseBdev4", 00:32:41.813 "uuid": "75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:41.813 "is_configured": true, 00:32:41.813 "data_offset": 2048, 00:32:41.813 "data_size": 63488 00:32:41.813 } 00:32:41.813 ] 00:32:41.813 }' 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:41.813 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:42.378 
17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.378 [2024-11-26 17:29:19.576601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.378 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:42.378 "name": "Existed_Raid", 00:32:42.378 "aliases": [ 00:32:42.378 "70c758b5-0b76-40fd-b8fd-6c58324436c1" 00:32:42.378 ], 00:32:42.378 "product_name": "Raid Volume", 00:32:42.378 "block_size": 512, 00:32:42.378 "num_blocks": 253952, 00:32:42.378 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:42.378 "assigned_rate_limits": { 00:32:42.378 "rw_ios_per_sec": 0, 00:32:42.378 "rw_mbytes_per_sec": 0, 00:32:42.378 "r_mbytes_per_sec": 0, 00:32:42.378 "w_mbytes_per_sec": 0 00:32:42.378 }, 00:32:42.378 "claimed": false, 00:32:42.378 "zoned": false, 00:32:42.378 "supported_io_types": { 00:32:42.378 "read": true, 00:32:42.378 "write": true, 00:32:42.378 "unmap": true, 00:32:42.378 "flush": true, 00:32:42.378 "reset": true, 00:32:42.378 "nvme_admin": false, 00:32:42.378 "nvme_io": false, 00:32:42.378 "nvme_io_md": false, 00:32:42.378 "write_zeroes": true, 00:32:42.378 "zcopy": false, 00:32:42.378 "get_zone_info": false, 00:32:42.378 "zone_management": false, 00:32:42.378 "zone_append": false, 00:32:42.378 "compare": false, 00:32:42.378 "compare_and_write": false, 00:32:42.378 "abort": 
false, 00:32:42.378 "seek_hole": false, 00:32:42.378 "seek_data": false, 00:32:42.378 "copy": false, 00:32:42.378 "nvme_iov_md": false 00:32:42.378 }, 00:32:42.378 "memory_domains": [ 00:32:42.378 { 00:32:42.378 "dma_device_id": "system", 00:32:42.378 "dma_device_type": 1 00:32:42.378 }, 00:32:42.378 { 00:32:42.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:42.378 "dma_device_type": 2 00:32:42.378 }, 00:32:42.378 { 00:32:42.378 "dma_device_id": "system", 00:32:42.378 "dma_device_type": 1 00:32:42.378 }, 00:32:42.378 { 00:32:42.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:42.378 "dma_device_type": 2 00:32:42.378 }, 00:32:42.378 { 00:32:42.378 "dma_device_id": "system", 00:32:42.379 "dma_device_type": 1 00:32:42.379 }, 00:32:42.379 { 00:32:42.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:42.379 "dma_device_type": 2 00:32:42.379 }, 00:32:42.379 { 00:32:42.379 "dma_device_id": "system", 00:32:42.379 "dma_device_type": 1 00:32:42.379 }, 00:32:42.379 { 00:32:42.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:42.379 "dma_device_type": 2 00:32:42.379 } 00:32:42.379 ], 00:32:42.379 "driver_specific": { 00:32:42.379 "raid": { 00:32:42.379 "uuid": "70c758b5-0b76-40fd-b8fd-6c58324436c1", 00:32:42.379 "strip_size_kb": 64, 00:32:42.379 "state": "online", 00:32:42.379 "raid_level": "concat", 00:32:42.379 "superblock": true, 00:32:42.379 "num_base_bdevs": 4, 00:32:42.379 "num_base_bdevs_discovered": 4, 00:32:42.379 "num_base_bdevs_operational": 4, 00:32:42.379 "base_bdevs_list": [ 00:32:42.379 { 00:32:42.379 "name": "NewBaseBdev", 00:32:42.379 "uuid": "16c4deb5-a085-4ada-a81f-d6bf6d160804", 00:32:42.379 "is_configured": true, 00:32:42.379 "data_offset": 2048, 00:32:42.379 "data_size": 63488 00:32:42.379 }, 00:32:42.379 { 00:32:42.379 "name": "BaseBdev2", 00:32:42.379 "uuid": "7f20f31e-2802-49ff-8788-c902f26ee874", 00:32:42.379 "is_configured": true, 00:32:42.379 "data_offset": 2048, 00:32:42.379 "data_size": 63488 00:32:42.379 }, 00:32:42.379 { 00:32:42.379 
"name": "BaseBdev3", 00:32:42.379 "uuid": "dc4b485d-3a27-459f-bde4-1e5df6152436", 00:32:42.379 "is_configured": true, 00:32:42.379 "data_offset": 2048, 00:32:42.379 "data_size": 63488 00:32:42.379 }, 00:32:42.379 { 00:32:42.379 "name": "BaseBdev4", 00:32:42.379 "uuid": "75115ce6-77a6-41b0-a27e-3fd94f37c74b", 00:32:42.379 "is_configured": true, 00:32:42.379 "data_offset": 2048, 00:32:42.379 "data_size": 63488 00:32:42.379 } 00:32:42.379 ] 00:32:42.379 } 00:32:42.379 } 00:32:42.379 }' 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:42.379 BaseBdev2 00:32:42.379 BaseBdev3 00:32:42.379 BaseBdev4' 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:42.379 17:29:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:42.379 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.637 [2024-11-26 17:29:19.884307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:42.637 [2024-11-26 17:29:19.884344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:42.637 [2024-11-26 17:29:19.884432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:42.637 [2024-11-26 17:29:19.884509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:42.637 [2024-11-26 17:29:19.884524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72384 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72384 ']' 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72384 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72384 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72384' 00:32:42.637 killing process with pid 72384 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72384 00:32:42.637 [2024-11-26 17:29:19.932080] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:42.637 17:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72384 00:32:43.203 [2024-11-26 17:29:20.397188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:44.579 17:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:44.579 00:32:44.579 real 0m12.171s 00:32:44.579 user 0m19.281s 00:32:44.579 sys 0m2.216s 00:32:44.579 17:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.579 17:29:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.579 ************************************ 00:32:44.579 END TEST raid_state_function_test_sb 00:32:44.579 ************************************ 00:32:44.579 17:29:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:32:44.579 17:29:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:44.579 17:29:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.579 17:29:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:44.579 ************************************ 00:32:44.579 START TEST raid_superblock_test 00:32:44.579 ************************************ 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:44.579 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:44.580 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73060 00:32:44.580 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:44.580 17:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73060 00:32:44.580 17:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73060 ']' 00:32:44.580 17:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.580 17:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.580 17:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:44.580 17:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.580 17:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.580 [2024-11-26 17:29:21.846174] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:32:44.580 [2024-11-26 17:29:21.846356] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73060 ] 00:32:44.838 [2024-11-26 17:29:22.040206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.838 [2024-11-26 17:29:22.168979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.096 [2024-11-26 17:29:22.400740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:45.096 [2024-11-26 17:29:22.400808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:45.355 
17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.355 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.615 malloc1 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.615 [2024-11-26 17:29:22.810202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:45.615 [2024-11-26 17:29:22.810502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.615 [2024-11-26 17:29:22.810563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:45.615 [2024-11-26 17:29:22.810591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.615 [2024-11-26 17:29:22.813917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.615 [2024-11-26 17:29:22.813964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:45.615 pt1 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.615 malloc2 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.615 [2024-11-26 17:29:22.869063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:45.615 [2024-11-26 17:29:22.869279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.615 [2024-11-26 17:29:22.869358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:45.615 [2024-11-26 17:29:22.869496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.615 [2024-11-26 17:29:22.872306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.615 [2024-11-26 17:29:22.872474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:45.615 
pt2 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.615 malloc3 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.615 [2024-11-26 17:29:22.945515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:45.615 [2024-11-26 17:29:22.945699] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.615 [2024-11-26 17:29:22.945739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:45.615 [2024-11-26 17:29:22.945753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.615 [2024-11-26 17:29:22.948527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.615 [2024-11-26 17:29:22.948570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:45.615 pt3 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.615 malloc4 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:45.615 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.616 17:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.616 [2024-11-26 17:29:23.004510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:45.616 [2024-11-26 17:29:23.004705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.616 [2024-11-26 17:29:23.004741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:45.616 [2024-11-26 17:29:23.004754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.616 [2024-11-26 17:29:23.007478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.616 [2024-11-26 17:29:23.007634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:45.616 pt4 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.616 [2024-11-26 17:29:23.016585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:45.616 [2024-11-26 
17:29:23.019125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:45.616 [2024-11-26 17:29:23.019382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:45.616 [2024-11-26 17:29:23.019557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:45.616 [2024-11-26 17:29:23.019973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:45.616 [2024-11-26 17:29:23.020129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:45.616 [2024-11-26 17:29:23.020527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:45.616 [2024-11-26 17:29:23.020748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:45.616 [2024-11-26 17:29:23.020768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:45.616 [2024-11-26 17:29:23.021019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.616 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.891 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.891 "name": "raid_bdev1", 00:32:45.891 "uuid": "0324336d-15ba-4729-abaa-03d6010b3868", 00:32:45.891 "strip_size_kb": 64, 00:32:45.891 "state": "online", 00:32:45.891 "raid_level": "concat", 00:32:45.891 "superblock": true, 00:32:45.891 "num_base_bdevs": 4, 00:32:45.891 "num_base_bdevs_discovered": 4, 00:32:45.891 "num_base_bdevs_operational": 4, 00:32:45.891 "base_bdevs_list": [ 00:32:45.891 { 00:32:45.891 "name": "pt1", 00:32:45.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:45.891 "is_configured": true, 00:32:45.891 "data_offset": 2048, 00:32:45.891 "data_size": 63488 00:32:45.891 }, 00:32:45.891 { 00:32:45.891 "name": "pt2", 00:32:45.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:45.891 "is_configured": true, 00:32:45.891 "data_offset": 2048, 00:32:45.891 "data_size": 63488 00:32:45.891 }, 00:32:45.891 { 00:32:45.891 "name": "pt3", 00:32:45.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:45.891 "is_configured": true, 00:32:45.891 "data_offset": 2048, 00:32:45.891 
"data_size": 63488 00:32:45.891 }, 00:32:45.891 { 00:32:45.891 "name": "pt4", 00:32:45.891 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:45.891 "is_configured": true, 00:32:45.891 "data_offset": 2048, 00:32:45.891 "data_size": 63488 00:32:45.891 } 00:32:45.891 ] 00:32:45.891 }' 00:32:45.891 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.891 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:46.180 [2024-11-26 17:29:23.473481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:46.180 "name": "raid_bdev1", 00:32:46.180 "aliases": [ 00:32:46.180 "0324336d-15ba-4729-abaa-03d6010b3868" 
00:32:46.180 ], 00:32:46.180 "product_name": "Raid Volume", 00:32:46.180 "block_size": 512, 00:32:46.180 "num_blocks": 253952, 00:32:46.180 "uuid": "0324336d-15ba-4729-abaa-03d6010b3868", 00:32:46.180 "assigned_rate_limits": { 00:32:46.180 "rw_ios_per_sec": 0, 00:32:46.180 "rw_mbytes_per_sec": 0, 00:32:46.180 "r_mbytes_per_sec": 0, 00:32:46.180 "w_mbytes_per_sec": 0 00:32:46.180 }, 00:32:46.180 "claimed": false, 00:32:46.180 "zoned": false, 00:32:46.180 "supported_io_types": { 00:32:46.180 "read": true, 00:32:46.180 "write": true, 00:32:46.180 "unmap": true, 00:32:46.180 "flush": true, 00:32:46.180 "reset": true, 00:32:46.180 "nvme_admin": false, 00:32:46.180 "nvme_io": false, 00:32:46.180 "nvme_io_md": false, 00:32:46.180 "write_zeroes": true, 00:32:46.180 "zcopy": false, 00:32:46.180 "get_zone_info": false, 00:32:46.180 "zone_management": false, 00:32:46.180 "zone_append": false, 00:32:46.180 "compare": false, 00:32:46.180 "compare_and_write": false, 00:32:46.180 "abort": false, 00:32:46.180 "seek_hole": false, 00:32:46.180 "seek_data": false, 00:32:46.180 "copy": false, 00:32:46.180 "nvme_iov_md": false 00:32:46.180 }, 00:32:46.180 "memory_domains": [ 00:32:46.180 { 00:32:46.180 "dma_device_id": "system", 00:32:46.180 "dma_device_type": 1 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.180 "dma_device_type": 2 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "dma_device_id": "system", 00:32:46.180 "dma_device_type": 1 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.180 "dma_device_type": 2 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "dma_device_id": "system", 00:32:46.180 "dma_device_type": 1 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.180 "dma_device_type": 2 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "dma_device_id": "system", 00:32:46.180 "dma_device_type": 1 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:32:46.180 "dma_device_type": 2 00:32:46.180 } 00:32:46.180 ], 00:32:46.180 "driver_specific": { 00:32:46.180 "raid": { 00:32:46.180 "uuid": "0324336d-15ba-4729-abaa-03d6010b3868", 00:32:46.180 "strip_size_kb": 64, 00:32:46.180 "state": "online", 00:32:46.180 "raid_level": "concat", 00:32:46.180 "superblock": true, 00:32:46.180 "num_base_bdevs": 4, 00:32:46.180 "num_base_bdevs_discovered": 4, 00:32:46.180 "num_base_bdevs_operational": 4, 00:32:46.180 "base_bdevs_list": [ 00:32:46.180 { 00:32:46.180 "name": "pt1", 00:32:46.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:46.180 "is_configured": true, 00:32:46.180 "data_offset": 2048, 00:32:46.180 "data_size": 63488 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "name": "pt2", 00:32:46.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:46.180 "is_configured": true, 00:32:46.180 "data_offset": 2048, 00:32:46.180 "data_size": 63488 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "name": "pt3", 00:32:46.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:46.180 "is_configured": true, 00:32:46.180 "data_offset": 2048, 00:32:46.180 "data_size": 63488 00:32:46.180 }, 00:32:46.180 { 00:32:46.180 "name": "pt4", 00:32:46.180 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:46.180 "is_configured": true, 00:32:46.180 "data_offset": 2048, 00:32:46.180 "data_size": 63488 00:32:46.180 } 00:32:46.180 ] 00:32:46.180 } 00:32:46.180 } 00:32:46.180 }' 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:46.180 pt2 00:32:46.180 pt3 00:32:46.180 pt4' 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.180 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.445 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.445 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.445 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.446 17:29:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.446 [2024-11-26 17:29:23.781418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0324336d-15ba-4729-abaa-03d6010b3868 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0324336d-15ba-4729-abaa-03d6010b3868 ']' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.446 [2024-11-26 17:29:23.817148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:46.446 [2024-11-26 17:29:23.817283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:46.446 [2024-11-26 17:29:23.817473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:46.446 [2024-11-26 17:29:23.817578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:46.446 [2024-11-26 17:29:23.817803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.446 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.705 17:29:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.705 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.705 [2024-11-26 17:29:23.965198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:46.705 [2024-11-26 17:29:23.967458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:46.705 [2024-11-26 17:29:23.967620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:46.706 [2024-11-26 17:29:23.967692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:32:46.706 [2024-11-26 17:29:23.967827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:46.706 [2024-11-26 17:29:23.968062] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:46.706 [2024-11-26 17:29:23.968196] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:32:46.706 [2024-11-26 17:29:23.968375] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:32:46.706 [2024-11-26 17:29:23.968476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:46.706 [2024-11-26 17:29:23.968540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:32:46.706 request: 00:32:46.706 { 00:32:46.706 "name": "raid_bdev1", 00:32:46.706 "raid_level": "concat", 00:32:46.706 "base_bdevs": [ 00:32:46.706 "malloc1", 00:32:46.706 "malloc2", 00:32:46.706 "malloc3", 00:32:46.706 "malloc4" 00:32:46.706 ], 00:32:46.706 "strip_size_kb": 64, 00:32:46.706 "superblock": false, 00:32:46.706 "method": "bdev_raid_create", 00:32:46.706 "req_id": 1 00:32:46.706 } 00:32:46.706 Got JSON-RPC error response 00:32:46.706 response: 00:32:46.706 { 00:32:46.706 "code": -17, 00:32:46.706 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:46.706 } 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.706 17:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.706 [2024-11-26 17:29:24.025264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:46.706 [2024-11-26 17:29:24.025454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:46.706 [2024-11-26 17:29:24.025554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:46.706 [2024-11-26 17:29:24.025635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:46.706 [2024-11-26 17:29:24.028386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:46.706 [2024-11-26 17:29:24.028568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:46.706 [2024-11-26 17:29:24.028745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:46.706 [2024-11-26 17:29:24.028911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:46.706 pt1 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:46.706 "name": "raid_bdev1", 00:32:46.706 "uuid": "0324336d-15ba-4729-abaa-03d6010b3868", 00:32:46.706 "strip_size_kb": 64, 00:32:46.706 "state": "configuring", 00:32:46.706 "raid_level": "concat", 00:32:46.706 "superblock": true, 00:32:46.706 "num_base_bdevs": 4, 00:32:46.706 "num_base_bdevs_discovered": 1, 00:32:46.706 "num_base_bdevs_operational": 4, 00:32:46.706 "base_bdevs_list": [ 00:32:46.706 { 00:32:46.706 "name": "pt1", 00:32:46.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:46.706 "is_configured": true, 00:32:46.706 "data_offset": 2048, 00:32:46.706 "data_size": 63488 00:32:46.706 }, 00:32:46.706 { 00:32:46.706 "name": null, 00:32:46.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:46.706 "is_configured": false, 00:32:46.706 "data_offset": 2048, 00:32:46.706 "data_size": 63488 00:32:46.706 }, 00:32:46.706 { 00:32:46.706 "name": null, 00:32:46.706 
"uuid": "00000000-0000-0000-0000-000000000003", 00:32:46.706 "is_configured": false, 00:32:46.706 "data_offset": 2048, 00:32:46.706 "data_size": 63488 00:32:46.706 }, 00:32:46.706 { 00:32:46.706 "name": null, 00:32:46.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:46.706 "is_configured": false, 00:32:46.706 "data_offset": 2048, 00:32:46.706 "data_size": 63488 00:32:46.706 } 00:32:46.706 ] 00:32:46.706 }' 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:46.706 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.274 [2024-11-26 17:29:24.481363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:47.274 [2024-11-26 17:29:24.481440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.274 [2024-11-26 17:29:24.481462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:47.274 [2024-11-26 17:29:24.481477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.274 [2024-11-26 17:29:24.481930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.274 [2024-11-26 17:29:24.481953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:47.274 [2024-11-26 17:29:24.482034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:47.274 [2024-11-26 17:29:24.482091] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:47.274 pt2 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.274 [2024-11-26 17:29:24.489354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.274 17:29:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:47.274 "name": "raid_bdev1", 00:32:47.274 "uuid": "0324336d-15ba-4729-abaa-03d6010b3868", 00:32:47.274 "strip_size_kb": 64, 00:32:47.274 "state": "configuring", 00:32:47.274 "raid_level": "concat", 00:32:47.274 "superblock": true, 00:32:47.274 "num_base_bdevs": 4, 00:32:47.274 "num_base_bdevs_discovered": 1, 00:32:47.274 "num_base_bdevs_operational": 4, 00:32:47.274 "base_bdevs_list": [ 00:32:47.274 { 00:32:47.274 "name": "pt1", 00:32:47.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:47.274 "is_configured": true, 00:32:47.274 "data_offset": 2048, 00:32:47.274 "data_size": 63488 00:32:47.274 }, 00:32:47.274 { 00:32:47.274 "name": null, 00:32:47.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:47.274 "is_configured": false, 00:32:47.274 "data_offset": 0, 00:32:47.274 "data_size": 63488 00:32:47.274 }, 00:32:47.274 { 00:32:47.274 "name": null, 00:32:47.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:47.274 "is_configured": false, 00:32:47.274 "data_offset": 2048, 00:32:47.274 "data_size": 63488 00:32:47.274 }, 00:32:47.274 { 00:32:47.274 "name": null, 00:32:47.274 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:47.274 "is_configured": false, 00:32:47.274 "data_offset": 2048, 00:32:47.274 "data_size": 63488 00:32:47.274 } 00:32:47.274 ] 00:32:47.274 }' 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:47.274 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.534 [2024-11-26 17:29:24.905457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:47.534 [2024-11-26 17:29:24.905529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.534 [2024-11-26 17:29:24.905554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:47.534 [2024-11-26 17:29:24.905566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.534 [2024-11-26 17:29:24.906033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.534 [2024-11-26 17:29:24.906078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:47.534 [2024-11-26 17:29:24.906164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:47.534 [2024-11-26 17:29:24.906188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:47.534 pt2 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.534 [2024-11-26 17:29:24.913428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:47.534 [2024-11-26 17:29:24.913620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.534 [2024-11-26 17:29:24.913648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:32:47.534 [2024-11-26 17:29:24.913658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.534 [2024-11-26 17:29:24.914034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.534 [2024-11-26 17:29:24.914084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:47.534 [2024-11-26 17:29:24.914157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:47.534 [2024-11-26 17:29:24.914183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:47.534 pt3 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.534 [2024-11-26 17:29:24.921412] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:32:47.534 [2024-11-26 17:29:24.921460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.534 [2024-11-26 17:29:24.921498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:47.534 [2024-11-26 17:29:24.921508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.534 [2024-11-26 17:29:24.921910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.534 [2024-11-26 17:29:24.921934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:47.534 [2024-11-26 17:29:24.922004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:32:47.534 [2024-11-26 17:29:24.922027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:47.534 [2024-11-26 17:29:24.922191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:47.534 [2024-11-26 17:29:24.922207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:47.534 [2024-11-26 17:29:24.922456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:47.534 [2024-11-26 17:29:24.922610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:47.534 [2024-11-26 17:29:24.922624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:47.534 [2024-11-26 17:29:24.922768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:47.534 pt4 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:47.534 
17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:47.534 "name": "raid_bdev1", 00:32:47.534 "uuid": "0324336d-15ba-4729-abaa-03d6010b3868", 00:32:47.534 "strip_size_kb": 64, 00:32:47.534 "state": "online", 00:32:47.534 "raid_level": "concat", 00:32:47.534 "superblock": true, 00:32:47.534 
"num_base_bdevs": 4, 00:32:47.534 "num_base_bdevs_discovered": 4, 00:32:47.534 "num_base_bdevs_operational": 4, 00:32:47.534 "base_bdevs_list": [ 00:32:47.534 { 00:32:47.534 "name": "pt1", 00:32:47.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:47.534 "is_configured": true, 00:32:47.534 "data_offset": 2048, 00:32:47.534 "data_size": 63488 00:32:47.534 }, 00:32:47.534 { 00:32:47.534 "name": "pt2", 00:32:47.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:47.534 "is_configured": true, 00:32:47.534 "data_offset": 2048, 00:32:47.534 "data_size": 63488 00:32:47.534 }, 00:32:47.534 { 00:32:47.534 "name": "pt3", 00:32:47.534 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:47.534 "is_configured": true, 00:32:47.534 "data_offset": 2048, 00:32:47.534 "data_size": 63488 00:32:47.534 }, 00:32:47.534 { 00:32:47.534 "name": "pt4", 00:32:47.534 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:47.534 "is_configured": true, 00:32:47.534 "data_offset": 2048, 00:32:47.534 "data_size": 63488 00:32:47.534 } 00:32:47.534 ] 00:32:47.534 }' 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:47.534 17:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:48.102 [2024-11-26 17:29:25.357907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.102 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:48.102 "name": "raid_bdev1", 00:32:48.102 "aliases": [ 00:32:48.102 "0324336d-15ba-4729-abaa-03d6010b3868" 00:32:48.102 ], 00:32:48.102 "product_name": "Raid Volume", 00:32:48.102 "block_size": 512, 00:32:48.102 "num_blocks": 253952, 00:32:48.102 "uuid": "0324336d-15ba-4729-abaa-03d6010b3868", 00:32:48.102 "assigned_rate_limits": { 00:32:48.102 "rw_ios_per_sec": 0, 00:32:48.102 "rw_mbytes_per_sec": 0, 00:32:48.102 "r_mbytes_per_sec": 0, 00:32:48.102 "w_mbytes_per_sec": 0 00:32:48.102 }, 00:32:48.102 "claimed": false, 00:32:48.102 "zoned": false, 00:32:48.102 "supported_io_types": { 00:32:48.102 "read": true, 00:32:48.102 "write": true, 00:32:48.102 "unmap": true, 00:32:48.102 "flush": true, 00:32:48.102 "reset": true, 00:32:48.102 "nvme_admin": false, 00:32:48.102 "nvme_io": false, 00:32:48.102 "nvme_io_md": false, 00:32:48.102 "write_zeroes": true, 00:32:48.102 "zcopy": false, 00:32:48.102 "get_zone_info": false, 00:32:48.102 "zone_management": false, 00:32:48.102 "zone_append": false, 00:32:48.102 "compare": false, 00:32:48.102 "compare_and_write": false, 00:32:48.102 "abort": false, 00:32:48.102 "seek_hole": false, 00:32:48.102 "seek_data": false, 00:32:48.102 "copy": false, 00:32:48.102 "nvme_iov_md": false 00:32:48.102 }, 00:32:48.102 "memory_domains": [ 00:32:48.102 { 00:32:48.102 "dma_device_id": "system", 
00:32:48.102 "dma_device_type": 1 00:32:48.102 }, 00:32:48.102 { 00:32:48.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.102 "dma_device_type": 2 00:32:48.102 }, 00:32:48.102 { 00:32:48.102 "dma_device_id": "system", 00:32:48.102 "dma_device_type": 1 00:32:48.102 }, 00:32:48.102 { 00:32:48.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.102 "dma_device_type": 2 00:32:48.102 }, 00:32:48.102 { 00:32:48.102 "dma_device_id": "system", 00:32:48.102 "dma_device_type": 1 00:32:48.102 }, 00:32:48.102 { 00:32:48.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.102 "dma_device_type": 2 00:32:48.102 }, 00:32:48.102 { 00:32:48.102 "dma_device_id": "system", 00:32:48.102 "dma_device_type": 1 00:32:48.102 }, 00:32:48.102 { 00:32:48.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.102 "dma_device_type": 2 00:32:48.102 } 00:32:48.102 ], 00:32:48.102 "driver_specific": { 00:32:48.102 "raid": { 00:32:48.102 "uuid": "0324336d-15ba-4729-abaa-03d6010b3868", 00:32:48.102 "strip_size_kb": 64, 00:32:48.102 "state": "online", 00:32:48.102 "raid_level": "concat", 00:32:48.102 "superblock": true, 00:32:48.102 "num_base_bdevs": 4, 00:32:48.102 "num_base_bdevs_discovered": 4, 00:32:48.102 "num_base_bdevs_operational": 4, 00:32:48.102 "base_bdevs_list": [ 00:32:48.102 { 00:32:48.102 "name": "pt1", 00:32:48.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:48.103 "is_configured": true, 00:32:48.103 "data_offset": 2048, 00:32:48.103 "data_size": 63488 00:32:48.103 }, 00:32:48.103 { 00:32:48.103 "name": "pt2", 00:32:48.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:48.103 "is_configured": true, 00:32:48.103 "data_offset": 2048, 00:32:48.103 "data_size": 63488 00:32:48.103 }, 00:32:48.103 { 00:32:48.103 "name": "pt3", 00:32:48.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:48.103 "is_configured": true, 00:32:48.103 "data_offset": 2048, 00:32:48.103 "data_size": 63488 00:32:48.103 }, 00:32:48.103 { 00:32:48.103 "name": "pt4", 00:32:48.103 
"uuid": "00000000-0000-0000-0000-000000000004", 00:32:48.103 "is_configured": true, 00:32:48.103 "data_offset": 2048, 00:32:48.103 "data_size": 63488 00:32:48.103 } 00:32:48.103 ] 00:32:48.103 } 00:32:48.103 } 00:32:48.103 }' 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:48.103 pt2 00:32:48.103 pt3 00:32:48.103 pt4' 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.103 
17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.103 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.363 17:29:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.363 [2024-11-26 17:29:25.669884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0324336d-15ba-4729-abaa-03d6010b3868 '!=' 0324336d-15ba-4729-abaa-03d6010b3868 ']' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73060 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73060 ']' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73060 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:32:48.363 17:29:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73060 00:32:48.363 killing process with pid 73060 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73060' 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73060 00:32:48.363 [2024-11-26 17:29:25.740462] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:48.363 17:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73060 00:32:48.363 [2024-11-26 17:29:25.740586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:48.363 [2024-11-26 17:29:25.740703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:48.363 [2024-11-26 17:29:25.740722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:48.930 [2024-11-26 17:29:26.152118] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:50.307 17:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:50.307 00:32:50.307 real 0m5.587s 00:32:50.307 user 0m7.955s 00:32:50.307 sys 0m1.039s 00:32:50.307 17:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.307 17:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.307 ************************************ 00:32:50.308 END TEST raid_superblock_test 00:32:50.308 ************************************ 00:32:50.308 
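The superblock test above loops over each base bdev (`pt1`..`pt4`) and compares its `"block_size md_size md_interleave dif_type"` string against the raid bdev's (`bdev_raid.sh@191-193`). A minimal stand-alone sketch of that comparison loop, with the values hard-coded rather than fetched via `rpc_cmd` and `jq` as the real script does:

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@191-193 comparison loop. The real test builds
# cmp_raid_bdev and cmp_base_bdev from "rpc_cmd bdev_get_bdevs | jq"; here
# the strings are hard-coded stand-ins (block_size=512, empty md fields,
# matching the '512   ' values visible in the trace above).
cmp_raid_bdev='512   '
base_bdev_names='pt1 pt2 pt3 pt4'
for name in $base_bdev_names; do
    cmp_base_bdev='512   '   # stand-in for the per-bdev rpc_cmd/jq lookup
    if [[ "$cmp_base_bdev" != "$cmp_raid_bdev" ]]; then
        echo "mismatch on $name"
        exit 1
    fi
done
echo "all base bdevs match"
```

The trailing spaces matter: the trace's `[[ 512 == \5\1\2\ \ \ ]]` is comparing the full four-field string, where the three empty metadata fields leave three trailing separators.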
17:29:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:32:50.308 17:29:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:50.308 17:29:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.308 17:29:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:50.308 ************************************ 00:32:50.308 START TEST raid_read_error_test 00:32:50.308 ************************************ 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eBw12A4tR8 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73325 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:50.308 17:29:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73325 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73325 ']' 00:32:50.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.308 17:29:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.308 [2024-11-26 17:29:27.508582] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:32:50.308 [2024-11-26 17:29:27.508760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73325 ] 00:32:50.308 [2024-11-26 17:29:27.702114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.567 [2024-11-26 17:29:27.817411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.825 [2024-11-26 17:29:28.032994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:50.825 [2024-11-26 17:29:28.033242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.085 BaseBdev1_malloc 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.085 true 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.085 [2024-11-26 17:29:28.502458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:51.085 [2024-11-26 17:29:28.502680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.085 [2024-11-26 17:29:28.502720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:51.085 [2024-11-26 17:29:28.502738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.085 [2024-11-26 17:29:28.505617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.085 [2024-11-26 17:29:28.505668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:51.085 BaseBdev1 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.085 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 BaseBdev2_malloc 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 true 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 [2024-11-26 17:29:28.564144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:51.344 [2024-11-26 17:29:28.564205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.344 [2024-11-26 17:29:28.564225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:51.344 [2024-11-26 17:29:28.564240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.344 [2024-11-26 17:29:28.566840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.344 [2024-11-26 17:29:28.567042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:51.344 BaseBdev2 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 BaseBdev3_malloc 00:32:51.344 17:29:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 true 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 [2024-11-26 17:29:28.645357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:32:51.344 [2024-11-26 17:29:28.645420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.344 [2024-11-26 17:29:28.645443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:51.344 [2024-11-26 17:29:28.645458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.344 [2024-11-26 17:29:28.648120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.344 [2024-11-26 17:29:28.648164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:51.344 BaseBdev3 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 BaseBdev4_malloc 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 true 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 [2024-11-26 17:29:28.713473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:32:51.344 [2024-11-26 17:29:28.713534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.344 [2024-11-26 17:29:28.713557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:51.344 [2024-11-26 17:29:28.713572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.344 [2024-11-26 17:29:28.716263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.344 [2024-11-26 17:29:28.716309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:51.344 BaseBdev4 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 [2024-11-26 17:29:28.721551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:51.344 [2024-11-26 17:29:28.723873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:51.344 [2024-11-26 17:29:28.723959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:51.344 [2024-11-26 17:29:28.724029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:51.344 [2024-11-26 17:29:28.724322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:32:51.344 [2024-11-26 17:29:28.724347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:51.344 [2024-11-26 17:29:28.724634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:32:51.344 [2024-11-26 17:29:28.724818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:32:51.344 [2024-11-26 17:29:28.724838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:32:51.344 [2024-11-26 17:29:28.724999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:51.344 17:29:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.344 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:51.344 "name": "raid_bdev1", 00:32:51.344 "uuid": "c5059365-a009-435e-88d6-18c0f10471e4", 00:32:51.344 "strip_size_kb": 64, 00:32:51.344 "state": "online", 00:32:51.344 "raid_level": "concat", 00:32:51.344 "superblock": true, 00:32:51.344 "num_base_bdevs": 4, 00:32:51.344 "num_base_bdevs_discovered": 4, 00:32:51.344 "num_base_bdevs_operational": 4, 00:32:51.344 "base_bdevs_list": [ 
00:32:51.344 { 00:32:51.344 "name": "BaseBdev1", 00:32:51.344 "uuid": "2b868c53-1870-5f81-9637-6cea2e5171a5", 00:32:51.344 "is_configured": true, 00:32:51.344 "data_offset": 2048, 00:32:51.344 "data_size": 63488 00:32:51.344 }, 00:32:51.344 { 00:32:51.344 "name": "BaseBdev2", 00:32:51.344 "uuid": "93150bed-df4d-57f0-8797-f45911489c18", 00:32:51.344 "is_configured": true, 00:32:51.344 "data_offset": 2048, 00:32:51.344 "data_size": 63488 00:32:51.344 }, 00:32:51.344 { 00:32:51.344 "name": "BaseBdev3", 00:32:51.344 "uuid": "783e39db-6854-5d4a-82f8-e05b1e2be706", 00:32:51.344 "is_configured": true, 00:32:51.344 "data_offset": 2048, 00:32:51.344 "data_size": 63488 00:32:51.344 }, 00:32:51.344 { 00:32:51.344 "name": "BaseBdev4", 00:32:51.344 "uuid": "89581275-ed1f-5d12-bea2-6d46a2029f51", 00:32:51.344 "is_configured": true, 00:32:51.344 "data_offset": 2048, 00:32:51.344 "data_size": 63488 00:32:51.344 } 00:32:51.344 ] 00:32:51.345 }' 00:32:51.345 17:29:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:51.345 17:29:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.911 17:29:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:51.911 17:29:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:51.911 [2024-11-26 17:29:29.299138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.847 17:29:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.847 17:29:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:52.847 "name": "raid_bdev1", 00:32:52.847 "uuid": "c5059365-a009-435e-88d6-18c0f10471e4", 00:32:52.847 "strip_size_kb": 64, 00:32:52.847 "state": "online", 00:32:52.847 "raid_level": "concat", 00:32:52.847 "superblock": true, 00:32:52.847 "num_base_bdevs": 4, 00:32:52.847 "num_base_bdevs_discovered": 4, 00:32:52.847 "num_base_bdevs_operational": 4, 00:32:52.847 "base_bdevs_list": [ 00:32:52.847 { 00:32:52.847 "name": "BaseBdev1", 00:32:52.847 "uuid": "2b868c53-1870-5f81-9637-6cea2e5171a5", 00:32:52.847 "is_configured": true, 00:32:52.847 "data_offset": 2048, 00:32:52.847 "data_size": 63488 00:32:52.847 }, 00:32:52.847 { 00:32:52.847 "name": "BaseBdev2", 00:32:52.847 "uuid": "93150bed-df4d-57f0-8797-f45911489c18", 00:32:52.847 "is_configured": true, 00:32:52.847 "data_offset": 2048, 00:32:52.847 "data_size": 63488 00:32:52.847 }, 00:32:52.847 { 00:32:52.847 "name": "BaseBdev3", 00:32:52.847 "uuid": "783e39db-6854-5d4a-82f8-e05b1e2be706", 00:32:52.847 "is_configured": true, 00:32:52.847 "data_offset": 2048, 00:32:52.847 "data_size": 63488 00:32:52.847 }, 00:32:52.847 { 00:32:52.847 "name": "BaseBdev4", 00:32:52.847 "uuid": "89581275-ed1f-5d12-bea2-6d46a2029f51", 00:32:52.847 "is_configured": true, 00:32:52.847 "data_offset": 2048, 00:32:52.847 "data_size": 63488 00:32:52.847 } 00:32:52.847 ] 00:32:52.847 }' 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:52.847 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.416 [2024-11-26 17:29:30.602498] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:53.416 [2024-11-26 17:29:30.602542] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:53.416 [2024-11-26 17:29:30.605476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:53.416 [2024-11-26 17:29:30.605550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:53.416 [2024-11-26 17:29:30.605603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:53.416 [2024-11-26 17:29:30.605623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:32:53.416 { 00:32:53.416 "results": [ 00:32:53.416 { 00:32:53.416 "job": "raid_bdev1", 00:32:53.416 "core_mask": "0x1", 00:32:53.416 "workload": "randrw", 00:32:53.416 "percentage": 50, 00:32:53.416 "status": "finished", 00:32:53.416 "queue_depth": 1, 00:32:53.416 "io_size": 131072, 00:32:53.416 "runtime": 1.300832, 00:32:53.416 "iops": 10937.615310816462, 00:32:53.416 "mibps": 1367.2019138520577, 00:32:53.416 "io_failed": 1, 00:32:53.416 "io_timeout": 0, 00:32:53.416 "avg_latency_us": 127.29057665599095, 00:32:53.416 "min_latency_us": 29.50095238095238, 00:32:53.416 "max_latency_us": 1583.7866666666666 00:32:53.416 } 00:32:53.416 ], 00:32:53.416 "core_count": 1 00:32:53.416 } 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73325 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73325 ']' 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73325 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73325 00:32:53.416 killing process with pid 73325 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73325' 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73325 00:32:53.416 [2024-11-26 17:29:30.646245] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:53.416 17:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73325 00:32:53.676 [2024-11-26 17:29:31.008271] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eBw12A4tR8 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:32:55.056 00:32:55.056 real 0m4.893s 00:32:55.056 user 0m5.776s 00:32:55.056 sys 0m0.660s 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:32:55.056 17:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.056 ************************************ 00:32:55.056 END TEST raid_read_error_test 00:32:55.056 ************************************ 00:32:55.056 17:29:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:32:55.056 17:29:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:55.056 17:29:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:55.056 17:29:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:55.056 ************************************ 00:32:55.056 START TEST raid_write_error_test 00:32:55.056 ************************************ 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:55.056 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Zxn0Ur2Qw9 00:32:55.057 17:29:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73475 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73475 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73475 ']' 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.057 17:29:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.057 [2024-11-26 17:29:32.465282] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:32:55.057 [2024-11-26 17:29:32.465957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73475 ] 00:32:55.325 [2024-11-26 17:29:32.658578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.583 [2024-11-26 17:29:32.779427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.583 [2024-11-26 17:29:33.004916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.583 [2024-11-26 17:29:33.004984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.151 BaseBdev1_malloc 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.151 true 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.151 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.151 [2024-11-26 17:29:33.383522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:56.152 [2024-11-26 17:29:33.383581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:56.152 [2024-11-26 17:29:33.383623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:56.152 [2024-11-26 17:29:33.383640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:56.152 [2024-11-26 17:29:33.386180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:56.152 [2024-11-26 17:29:33.386227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:56.152 BaseBdev1 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.152 BaseBdev2_malloc 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:56.152 17:29:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.152 true 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.152 [2024-11-26 17:29:33.452419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:56.152 [2024-11-26 17:29:33.452478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:56.152 [2024-11-26 17:29:33.452499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:56.152 [2024-11-26 17:29:33.452514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:56.152 [2024-11-26 17:29:33.455139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:56.152 [2024-11-26 17:29:33.455178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:56.152 BaseBdev2 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:32:56.152 BaseBdev3_malloc 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.152 true 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.152 [2024-11-26 17:29:33.527196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:32:56.152 [2024-11-26 17:29:33.527252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:56.152 [2024-11-26 17:29:33.527273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:56.152 [2024-11-26 17:29:33.527289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:56.152 [2024-11-26 17:29:33.529831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:56.152 [2024-11-26 17:29:33.529873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:56.152 BaseBdev3 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.152 BaseBdev4_malloc 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.152 true 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.152 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.152 [2024-11-26 17:29:33.593259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:32:56.152 [2024-11-26 17:29:33.593319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:56.152 [2024-11-26 17:29:33.593340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:56.152 [2024-11-26 17:29:33.593356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:56.152 [2024-11-26 17:29:33.595900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:56.152 [2024-11-26 17:29:33.595942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:56.412 BaseBdev4 
00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.412 [2024-11-26 17:29:33.605343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:56.412 [2024-11-26 17:29:33.607580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:56.412 [2024-11-26 17:29:33.607665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:56.412 [2024-11-26 17:29:33.607734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:56.412 [2024-11-26 17:29:33.607960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:32:56.412 [2024-11-26 17:29:33.607977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:56.412 [2024-11-26 17:29:33.608268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:32:56.412 [2024-11-26 17:29:33.608451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:32:56.412 [2024-11-26 17:29:33.608472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:32:56.412 [2024-11-26 17:29:33.608626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:56.412 "name": "raid_bdev1", 00:32:56.412 "uuid": "b150f4d7-7e19-4652-b42c-f8968a8fc33a", 00:32:56.412 "strip_size_kb": 64, 00:32:56.412 "state": "online", 00:32:56.412 "raid_level": "concat", 00:32:56.412 "superblock": true, 00:32:56.412 "num_base_bdevs": 4, 00:32:56.412 "num_base_bdevs_discovered": 4, 00:32:56.412 
"num_base_bdevs_operational": 4, 00:32:56.412 "base_bdevs_list": [ 00:32:56.412 { 00:32:56.412 "name": "BaseBdev1", 00:32:56.412 "uuid": "69444743-c22d-5157-a327-fd6dce9d866c", 00:32:56.412 "is_configured": true, 00:32:56.412 "data_offset": 2048, 00:32:56.412 "data_size": 63488 00:32:56.412 }, 00:32:56.412 { 00:32:56.412 "name": "BaseBdev2", 00:32:56.412 "uuid": "c16f80ed-8a1c-551e-8740-b8153be6107c", 00:32:56.412 "is_configured": true, 00:32:56.412 "data_offset": 2048, 00:32:56.412 "data_size": 63488 00:32:56.412 }, 00:32:56.412 { 00:32:56.412 "name": "BaseBdev3", 00:32:56.412 "uuid": "26ac3def-6b68-50c1-ab76-31a1efd3296c", 00:32:56.412 "is_configured": true, 00:32:56.412 "data_offset": 2048, 00:32:56.412 "data_size": 63488 00:32:56.412 }, 00:32:56.412 { 00:32:56.412 "name": "BaseBdev4", 00:32:56.412 "uuid": "5148a9cd-96a3-5d85-990e-c157bbdf20d6", 00:32:56.412 "is_configured": true, 00:32:56.412 "data_offset": 2048, 00:32:56.412 "data_size": 63488 00:32:56.412 } 00:32:56.412 ] 00:32:56.412 }' 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:56.412 17:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.671 17:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:56.671 17:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:56.671 [2024-11-26 17:29:34.110989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.607 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.865 17:29:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.865 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:57.865 "name": "raid_bdev1", 00:32:57.865 "uuid": "b150f4d7-7e19-4652-b42c-f8968a8fc33a", 00:32:57.865 "strip_size_kb": 64, 00:32:57.865 "state": "online", 00:32:57.865 "raid_level": "concat", 00:32:57.865 "superblock": true, 00:32:57.865 "num_base_bdevs": 4, 00:32:57.865 "num_base_bdevs_discovered": 4, 00:32:57.865 "num_base_bdevs_operational": 4, 00:32:57.865 "base_bdevs_list": [ 00:32:57.865 { 00:32:57.865 "name": "BaseBdev1", 00:32:57.865 "uuid": "69444743-c22d-5157-a327-fd6dce9d866c", 00:32:57.865 "is_configured": true, 00:32:57.865 "data_offset": 2048, 00:32:57.865 "data_size": 63488 00:32:57.865 }, 00:32:57.865 { 00:32:57.865 "name": "BaseBdev2", 00:32:57.865 "uuid": "c16f80ed-8a1c-551e-8740-b8153be6107c", 00:32:57.865 "is_configured": true, 00:32:57.865 "data_offset": 2048, 00:32:57.865 "data_size": 63488 00:32:57.865 }, 00:32:57.865 { 00:32:57.865 "name": "BaseBdev3", 00:32:57.865 "uuid": "26ac3def-6b68-50c1-ab76-31a1efd3296c", 00:32:57.865 "is_configured": true, 00:32:57.865 "data_offset": 2048, 00:32:57.865 "data_size": 63488 00:32:57.865 }, 00:32:57.865 { 00:32:57.865 "name": "BaseBdev4", 00:32:57.865 "uuid": "5148a9cd-96a3-5d85-990e-c157bbdf20d6", 00:32:57.865 "is_configured": true, 00:32:57.865 "data_offset": 2048, 00:32:57.865 "data_size": 63488 00:32:57.865 } 00:32:57.865 ] 00:32:57.865 }' 00:32:57.865 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:57.865 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:58.124 [2024-11-26 17:29:35.466161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:58.124 [2024-11-26 17:29:35.466206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:58.124 [2024-11-26 17:29:35.469268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:58.124 [2024-11-26 17:29:35.469337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:58.124 [2024-11-26 17:29:35.469383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:58.124 [2024-11-26 17:29:35.469403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:32:58.124 { 00:32:58.124 "results": [ 00:32:58.124 { 00:32:58.124 "job": "raid_bdev1", 00:32:58.124 "core_mask": "0x1", 00:32:58.124 "workload": "randrw", 00:32:58.124 "percentage": 50, 00:32:58.124 "status": "finished", 00:32:58.124 "queue_depth": 1, 00:32:58.124 "io_size": 131072, 00:32:58.124 "runtime": 1.352968, 00:32:58.124 "iops": 14804.489093607535, 00:32:58.124 "mibps": 1850.561136700942, 00:32:58.124 "io_failed": 1, 00:32:58.124 "io_timeout": 0, 00:32:58.124 "avg_latency_us": 93.12081169425487, 00:32:58.124 "min_latency_us": 27.794285714285714, 00:32:58.124 "max_latency_us": 1536.9752380952382 00:32:58.124 } 00:32:58.124 ], 00:32:58.124 "core_count": 1 00:32:58.124 } 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73475 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73475 ']' 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73475 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73475 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:58.124 killing process with pid 73475 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73475' 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73475 00:32:58.124 [2024-11-26 17:29:35.510154] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:58.124 17:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73475 00:32:58.691 [2024-11-26 17:29:35.854138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:00.066 17:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Zxn0Ur2Qw9 00:33:00.066 17:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:00.066 17:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:00.066 17:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:33:00.066 17:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:33:00.066 17:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:00.066 17:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:33:00.066 17:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:33:00.066 00:33:00.066 real 0m4.775s 00:33:00.066 user 0m5.586s 
00:33:00.066 sys 0m0.615s 00:33:00.066 17:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:00.066 ************************************ 00:33:00.067 17:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.067 END TEST raid_write_error_test 00:33:00.067 ************************************ 00:33:00.067 17:29:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:33:00.067 17:29:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:33:00.067 17:29:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:00.067 17:29:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:00.067 17:29:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:00.067 ************************************ 00:33:00.067 START TEST raid_state_function_test 00:33:00.067 ************************************ 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:00.067 
17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:33:00.067 17:29:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73614 00:33:00.067 Process raid pid: 73614 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73614' 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73614 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73614 ']' 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.067 17:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.067 [2024-11-26 17:29:37.296129] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:33:00.067 [2024-11-26 17:29:37.296304] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.067 [2024-11-26 17:29:37.490953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.326 [2024-11-26 17:29:37.611227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.584 [2024-11-26 17:29:37.827546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:00.584 [2024-11-26 17:29:37.827592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.843 [2024-11-26 17:29:38.169181] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:00.843 [2024-11-26 17:29:38.169238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:00.843 [2024-11-26 17:29:38.169250] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:00.843 [2024-11-26 17:29:38.169263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:00.843 [2024-11-26 17:29:38.169271] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:33:00.843 [2024-11-26 17:29:38.169283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:00.843 [2024-11-26 17:29:38.169297] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:00.843 [2024-11-26 17:29:38.169309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:00.843 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.844 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:00.844 17:29:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.844 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.844 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.844 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:00.844 "name": "Existed_Raid", 00:33:00.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.844 "strip_size_kb": 0, 00:33:00.844 "state": "configuring", 00:33:00.844 "raid_level": "raid1", 00:33:00.844 "superblock": false, 00:33:00.844 "num_base_bdevs": 4, 00:33:00.844 "num_base_bdevs_discovered": 0, 00:33:00.844 "num_base_bdevs_operational": 4, 00:33:00.844 "base_bdevs_list": [ 00:33:00.844 { 00:33:00.844 "name": "BaseBdev1", 00:33:00.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.844 "is_configured": false, 00:33:00.844 "data_offset": 0, 00:33:00.844 "data_size": 0 00:33:00.844 }, 00:33:00.844 { 00:33:00.844 "name": "BaseBdev2", 00:33:00.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.844 "is_configured": false, 00:33:00.844 "data_offset": 0, 00:33:00.844 "data_size": 0 00:33:00.844 }, 00:33:00.844 { 00:33:00.844 "name": "BaseBdev3", 00:33:00.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.844 "is_configured": false, 00:33:00.844 "data_offset": 0, 00:33:00.844 "data_size": 0 00:33:00.844 }, 00:33:00.844 { 00:33:00.844 "name": "BaseBdev4", 00:33:00.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.844 "is_configured": false, 00:33:00.844 "data_offset": 0, 00:33:00.844 "data_size": 0 00:33:00.844 } 00:33:00.844 ] 00:33:00.844 }' 00:33:00.844 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:00.844 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 [2024-11-26 17:29:38.617229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:01.412 [2024-11-26 17:29:38.617274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 [2024-11-26 17:29:38.629215] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:01.412 [2024-11-26 17:29:38.629256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:01.412 [2024-11-26 17:29:38.629266] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:01.412 [2024-11-26 17:29:38.629279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:01.412 [2024-11-26 17:29:38.629287] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:01.412 [2024-11-26 17:29:38.629299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:01.412 [2024-11-26 17:29:38.629307] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:01.412 [2024-11-26 17:29:38.629318] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 [2024-11-26 17:29:38.674035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:01.412 BaseBdev1 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 [ 00:33:01.412 { 00:33:01.412 "name": "BaseBdev1", 00:33:01.412 "aliases": [ 00:33:01.412 "ef688f20-cb0d-4758-8d46-450c4c1f7210" 00:33:01.412 ], 00:33:01.412 "product_name": "Malloc disk", 00:33:01.412 "block_size": 512, 00:33:01.412 "num_blocks": 65536, 00:33:01.412 "uuid": "ef688f20-cb0d-4758-8d46-450c4c1f7210", 00:33:01.412 "assigned_rate_limits": { 00:33:01.412 "rw_ios_per_sec": 0, 00:33:01.412 "rw_mbytes_per_sec": 0, 00:33:01.412 "r_mbytes_per_sec": 0, 00:33:01.412 "w_mbytes_per_sec": 0 00:33:01.412 }, 00:33:01.412 "claimed": true, 00:33:01.412 "claim_type": "exclusive_write", 00:33:01.412 "zoned": false, 00:33:01.412 "supported_io_types": { 00:33:01.412 "read": true, 00:33:01.412 "write": true, 00:33:01.412 "unmap": true, 00:33:01.412 "flush": true, 00:33:01.412 "reset": true, 00:33:01.412 "nvme_admin": false, 00:33:01.412 "nvme_io": false, 00:33:01.412 "nvme_io_md": false, 00:33:01.412 "write_zeroes": true, 00:33:01.412 "zcopy": true, 00:33:01.412 "get_zone_info": false, 00:33:01.412 "zone_management": false, 00:33:01.412 "zone_append": false, 00:33:01.412 "compare": false, 00:33:01.412 "compare_and_write": false, 00:33:01.412 "abort": true, 00:33:01.412 "seek_hole": false, 00:33:01.412 "seek_data": false, 00:33:01.412 "copy": true, 00:33:01.412 "nvme_iov_md": false 00:33:01.412 }, 00:33:01.412 "memory_domains": [ 00:33:01.412 { 00:33:01.412 "dma_device_id": "system", 00:33:01.412 "dma_device_type": 1 00:33:01.412 }, 00:33:01.412 { 00:33:01.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.412 "dma_device_type": 2 00:33:01.412 } 00:33:01.412 ], 00:33:01.412 "driver_specific": {} 00:33:01.412 } 00:33:01.412 ] 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:33:01.412 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:01.413 "name": "Existed_Raid", 
00:33:01.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.413 "strip_size_kb": 0, 00:33:01.413 "state": "configuring", 00:33:01.413 "raid_level": "raid1", 00:33:01.413 "superblock": false, 00:33:01.413 "num_base_bdevs": 4, 00:33:01.413 "num_base_bdevs_discovered": 1, 00:33:01.413 "num_base_bdevs_operational": 4, 00:33:01.413 "base_bdevs_list": [ 00:33:01.413 { 00:33:01.413 "name": "BaseBdev1", 00:33:01.413 "uuid": "ef688f20-cb0d-4758-8d46-450c4c1f7210", 00:33:01.413 "is_configured": true, 00:33:01.413 "data_offset": 0, 00:33:01.413 "data_size": 65536 00:33:01.413 }, 00:33:01.413 { 00:33:01.413 "name": "BaseBdev2", 00:33:01.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.413 "is_configured": false, 00:33:01.413 "data_offset": 0, 00:33:01.413 "data_size": 0 00:33:01.413 }, 00:33:01.413 { 00:33:01.413 "name": "BaseBdev3", 00:33:01.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.413 "is_configured": false, 00:33:01.413 "data_offset": 0, 00:33:01.413 "data_size": 0 00:33:01.413 }, 00:33:01.413 { 00:33:01.413 "name": "BaseBdev4", 00:33:01.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.413 "is_configured": false, 00:33:01.413 "data_offset": 0, 00:33:01.413 "data_size": 0 00:33:01.413 } 00:33:01.413 ] 00:33:01.413 }' 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:01.413 17:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.989 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:01.989 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.989 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.989 [2024-11-26 17:29:39.178202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:01.989 [2024-11-26 17:29:39.178256] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:01.989 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.989 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:01.989 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.989 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.989 [2024-11-26 17:29:39.186243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:01.989 [2024-11-26 17:29:39.188319] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:01.989 [2024-11-26 17:29:39.188364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:01.989 [2024-11-26 17:29:39.188376] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:01.990 [2024-11-26 17:29:39.188391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:01.990 [2024-11-26 17:29:39.188399] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:01.990 [2024-11-26 17:29:39.188410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:01.990 
17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.990 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:01.990 "name": "Existed_Raid", 00:33:01.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.990 "strip_size_kb": 0, 00:33:01.990 "state": "configuring", 00:33:01.991 "raid_level": "raid1", 00:33:01.991 "superblock": false, 00:33:01.991 "num_base_bdevs": 4, 00:33:01.991 "num_base_bdevs_discovered": 1, 
00:33:01.991 "num_base_bdevs_operational": 4, 00:33:01.991 "base_bdevs_list": [ 00:33:01.991 { 00:33:01.991 "name": "BaseBdev1", 00:33:01.991 "uuid": "ef688f20-cb0d-4758-8d46-450c4c1f7210", 00:33:01.991 "is_configured": true, 00:33:01.991 "data_offset": 0, 00:33:01.991 "data_size": 65536 00:33:01.991 }, 00:33:01.991 { 00:33:01.991 "name": "BaseBdev2", 00:33:01.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.991 "is_configured": false, 00:33:01.991 "data_offset": 0, 00:33:01.991 "data_size": 0 00:33:01.991 }, 00:33:01.991 { 00:33:01.991 "name": "BaseBdev3", 00:33:01.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.991 "is_configured": false, 00:33:01.991 "data_offset": 0, 00:33:01.991 "data_size": 0 00:33:01.991 }, 00:33:01.991 { 00:33:01.991 "name": "BaseBdev4", 00:33:01.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.991 "is_configured": false, 00:33:01.991 "data_offset": 0, 00:33:01.991 "data_size": 0 00:33:01.991 } 00:33:01.991 ] 00:33:01.991 }' 00:33:01.991 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:01.991 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.252 [2024-11-26 17:29:39.676794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:02.252 BaseBdev2 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.252 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.511 [ 00:33:02.511 { 00:33:02.511 "name": "BaseBdev2", 00:33:02.511 "aliases": [ 00:33:02.511 "b305a80b-ac67-4e12-a9a9-6f1f9777820b" 00:33:02.511 ], 00:33:02.511 "product_name": "Malloc disk", 00:33:02.511 "block_size": 512, 00:33:02.511 "num_blocks": 65536, 00:33:02.511 "uuid": "b305a80b-ac67-4e12-a9a9-6f1f9777820b", 00:33:02.511 "assigned_rate_limits": { 00:33:02.511 "rw_ios_per_sec": 0, 00:33:02.511 "rw_mbytes_per_sec": 0, 00:33:02.511 "r_mbytes_per_sec": 0, 00:33:02.511 "w_mbytes_per_sec": 0 00:33:02.511 }, 00:33:02.511 "claimed": true, 00:33:02.511 "claim_type": "exclusive_write", 00:33:02.511 "zoned": false, 00:33:02.511 "supported_io_types": { 00:33:02.511 "read": true, 
00:33:02.511 "write": true, 00:33:02.511 "unmap": true, 00:33:02.511 "flush": true, 00:33:02.511 "reset": true, 00:33:02.511 "nvme_admin": false, 00:33:02.511 "nvme_io": false, 00:33:02.511 "nvme_io_md": false, 00:33:02.511 "write_zeroes": true, 00:33:02.511 "zcopy": true, 00:33:02.511 "get_zone_info": false, 00:33:02.511 "zone_management": false, 00:33:02.511 "zone_append": false, 00:33:02.511 "compare": false, 00:33:02.511 "compare_and_write": false, 00:33:02.511 "abort": true, 00:33:02.511 "seek_hole": false, 00:33:02.511 "seek_data": false, 00:33:02.511 "copy": true, 00:33:02.511 "nvme_iov_md": false 00:33:02.511 }, 00:33:02.511 "memory_domains": [ 00:33:02.511 { 00:33:02.511 "dma_device_id": "system", 00:33:02.511 "dma_device_type": 1 00:33:02.511 }, 00:33:02.511 { 00:33:02.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.511 "dma_device_type": 2 00:33:02.511 } 00:33:02.511 ], 00:33:02.511 "driver_specific": {} 00:33:02.511 } 00:33:02.511 ] 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:02.511 "name": "Existed_Raid", 00:33:02.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.511 "strip_size_kb": 0, 00:33:02.511 "state": "configuring", 00:33:02.511 "raid_level": "raid1", 00:33:02.511 "superblock": false, 00:33:02.511 "num_base_bdevs": 4, 00:33:02.511 "num_base_bdevs_discovered": 2, 00:33:02.511 "num_base_bdevs_operational": 4, 00:33:02.511 "base_bdevs_list": [ 00:33:02.511 { 00:33:02.511 "name": "BaseBdev1", 00:33:02.511 "uuid": "ef688f20-cb0d-4758-8d46-450c4c1f7210", 00:33:02.511 "is_configured": true, 00:33:02.511 "data_offset": 0, 00:33:02.511 "data_size": 65536 00:33:02.511 }, 00:33:02.511 { 00:33:02.511 "name": "BaseBdev2", 00:33:02.511 "uuid": "b305a80b-ac67-4e12-a9a9-6f1f9777820b", 00:33:02.511 "is_configured": true, 
00:33:02.511 "data_offset": 0, 00:33:02.511 "data_size": 65536 00:33:02.511 }, 00:33:02.511 { 00:33:02.511 "name": "BaseBdev3", 00:33:02.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.511 "is_configured": false, 00:33:02.511 "data_offset": 0, 00:33:02.511 "data_size": 0 00:33:02.511 }, 00:33:02.511 { 00:33:02.511 "name": "BaseBdev4", 00:33:02.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.511 "is_configured": false, 00:33:02.511 "data_offset": 0, 00:33:02.511 "data_size": 0 00:33:02.511 } 00:33:02.511 ] 00:33:02.511 }' 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:02.511 17:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.770 [2024-11-26 17:29:40.200516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:02.770 BaseBdev3 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.770 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.030 [ 00:33:03.030 { 00:33:03.030 "name": "BaseBdev3", 00:33:03.030 "aliases": [ 00:33:03.030 "0acee5f4-5829-4046-9b6a-b2b0dacbdeec" 00:33:03.030 ], 00:33:03.030 "product_name": "Malloc disk", 00:33:03.030 "block_size": 512, 00:33:03.030 "num_blocks": 65536, 00:33:03.030 "uuid": "0acee5f4-5829-4046-9b6a-b2b0dacbdeec", 00:33:03.030 "assigned_rate_limits": { 00:33:03.030 "rw_ios_per_sec": 0, 00:33:03.030 "rw_mbytes_per_sec": 0, 00:33:03.030 "r_mbytes_per_sec": 0, 00:33:03.030 "w_mbytes_per_sec": 0 00:33:03.030 }, 00:33:03.030 "claimed": true, 00:33:03.030 "claim_type": "exclusive_write", 00:33:03.030 "zoned": false, 00:33:03.030 "supported_io_types": { 00:33:03.030 "read": true, 00:33:03.030 "write": true, 00:33:03.030 "unmap": true, 00:33:03.030 "flush": true, 00:33:03.030 "reset": true, 00:33:03.030 "nvme_admin": false, 00:33:03.030 "nvme_io": false, 00:33:03.030 "nvme_io_md": false, 00:33:03.030 "write_zeroes": true, 00:33:03.030 "zcopy": true, 00:33:03.030 "get_zone_info": false, 00:33:03.030 "zone_management": false, 00:33:03.030 "zone_append": false, 00:33:03.030 "compare": false, 00:33:03.030 "compare_and_write": false, 
00:33:03.030 "abort": true, 00:33:03.030 "seek_hole": false, 00:33:03.030 "seek_data": false, 00:33:03.030 "copy": true, 00:33:03.030 "nvme_iov_md": false 00:33:03.030 }, 00:33:03.030 "memory_domains": [ 00:33:03.030 { 00:33:03.030 "dma_device_id": "system", 00:33:03.030 "dma_device_type": 1 00:33:03.030 }, 00:33:03.030 { 00:33:03.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.030 "dma_device_type": 2 00:33:03.030 } 00:33:03.030 ], 00:33:03.030 "driver_specific": {} 00:33:03.030 } 00:33:03.030 ] 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:03.030 "name": "Existed_Raid", 00:33:03.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.030 "strip_size_kb": 0, 00:33:03.030 "state": "configuring", 00:33:03.030 "raid_level": "raid1", 00:33:03.030 "superblock": false, 00:33:03.030 "num_base_bdevs": 4, 00:33:03.030 "num_base_bdevs_discovered": 3, 00:33:03.030 "num_base_bdevs_operational": 4, 00:33:03.030 "base_bdevs_list": [ 00:33:03.030 { 00:33:03.030 "name": "BaseBdev1", 00:33:03.030 "uuid": "ef688f20-cb0d-4758-8d46-450c4c1f7210", 00:33:03.030 "is_configured": true, 00:33:03.030 "data_offset": 0, 00:33:03.030 "data_size": 65536 00:33:03.030 }, 00:33:03.030 { 00:33:03.030 "name": "BaseBdev2", 00:33:03.030 "uuid": "b305a80b-ac67-4e12-a9a9-6f1f9777820b", 00:33:03.030 "is_configured": true, 00:33:03.030 "data_offset": 0, 00:33:03.030 "data_size": 65536 00:33:03.030 }, 00:33:03.030 { 00:33:03.030 "name": "BaseBdev3", 00:33:03.030 "uuid": "0acee5f4-5829-4046-9b6a-b2b0dacbdeec", 00:33:03.030 "is_configured": true, 00:33:03.030 "data_offset": 0, 00:33:03.030 "data_size": 65536 00:33:03.030 }, 00:33:03.030 { 00:33:03.030 "name": "BaseBdev4", 00:33:03.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.030 "is_configured": false, 
00:33:03.030 "data_offset": 0, 00:33:03.030 "data_size": 0 00:33:03.030 } 00:33:03.030 ] 00:33:03.030 }' 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:03.030 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.289 [2024-11-26 17:29:40.721976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:03.289 [2024-11-26 17:29:40.722036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:03.289 [2024-11-26 17:29:40.722077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:03.289 [2024-11-26 17:29:40.722390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:03.289 [2024-11-26 17:29:40.722568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:03.289 [2024-11-26 17:29:40.722585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:03.289 [2024-11-26 17:29:40.722850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:03.289 BaseBdev4 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.289 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.548 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.548 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:03.548 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.549 [ 00:33:03.549 { 00:33:03.549 "name": "BaseBdev4", 00:33:03.549 "aliases": [ 00:33:03.549 "9f6b4ede-69d3-4aa1-ab86-f10013c5f358" 00:33:03.549 ], 00:33:03.549 "product_name": "Malloc disk", 00:33:03.549 "block_size": 512, 00:33:03.549 "num_blocks": 65536, 00:33:03.549 "uuid": "9f6b4ede-69d3-4aa1-ab86-f10013c5f358", 00:33:03.549 "assigned_rate_limits": { 00:33:03.549 "rw_ios_per_sec": 0, 00:33:03.549 "rw_mbytes_per_sec": 0, 00:33:03.549 "r_mbytes_per_sec": 0, 00:33:03.549 "w_mbytes_per_sec": 0 00:33:03.549 }, 00:33:03.549 "claimed": true, 00:33:03.549 "claim_type": "exclusive_write", 00:33:03.549 "zoned": false, 00:33:03.549 "supported_io_types": { 00:33:03.549 "read": true, 00:33:03.549 "write": true, 00:33:03.549 "unmap": true, 00:33:03.549 "flush": true, 00:33:03.549 "reset": true, 00:33:03.549 
"nvme_admin": false, 00:33:03.549 "nvme_io": false, 00:33:03.549 "nvme_io_md": false, 00:33:03.549 "write_zeroes": true, 00:33:03.549 "zcopy": true, 00:33:03.549 "get_zone_info": false, 00:33:03.549 "zone_management": false, 00:33:03.549 "zone_append": false, 00:33:03.549 "compare": false, 00:33:03.549 "compare_and_write": false, 00:33:03.549 "abort": true, 00:33:03.549 "seek_hole": false, 00:33:03.549 "seek_data": false, 00:33:03.549 "copy": true, 00:33:03.549 "nvme_iov_md": false 00:33:03.549 }, 00:33:03.549 "memory_domains": [ 00:33:03.549 { 00:33:03.549 "dma_device_id": "system", 00:33:03.549 "dma_device_type": 1 00:33:03.549 }, 00:33:03.549 { 00:33:03.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.549 "dma_device_type": 2 00:33:03.549 } 00:33:03.549 ], 00:33:03.549 "driver_specific": {} 00:33:03.549 } 00:33:03.549 ] 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:03.549 17:29:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:03.549 "name": "Existed_Raid", 00:33:03.549 "uuid": "80c0a1ee-e83a-4c75-8b48-3b95f3211517", 00:33:03.549 "strip_size_kb": 0, 00:33:03.549 "state": "online", 00:33:03.549 "raid_level": "raid1", 00:33:03.549 "superblock": false, 00:33:03.549 "num_base_bdevs": 4, 00:33:03.549 "num_base_bdevs_discovered": 4, 00:33:03.549 "num_base_bdevs_operational": 4, 00:33:03.549 "base_bdevs_list": [ 00:33:03.549 { 00:33:03.549 "name": "BaseBdev1", 00:33:03.549 "uuid": "ef688f20-cb0d-4758-8d46-450c4c1f7210", 00:33:03.549 "is_configured": true, 00:33:03.549 "data_offset": 0, 00:33:03.549 "data_size": 65536 00:33:03.549 }, 00:33:03.549 { 00:33:03.549 "name": "BaseBdev2", 00:33:03.549 "uuid": "b305a80b-ac67-4e12-a9a9-6f1f9777820b", 00:33:03.549 "is_configured": true, 00:33:03.549 "data_offset": 0, 00:33:03.549 "data_size": 65536 00:33:03.549 }, 00:33:03.549 { 00:33:03.549 "name": "BaseBdev3", 00:33:03.549 "uuid": 
"0acee5f4-5829-4046-9b6a-b2b0dacbdeec", 00:33:03.549 "is_configured": true, 00:33:03.549 "data_offset": 0, 00:33:03.549 "data_size": 65536 00:33:03.549 }, 00:33:03.549 { 00:33:03.549 "name": "BaseBdev4", 00:33:03.549 "uuid": "9f6b4ede-69d3-4aa1-ab86-f10013c5f358", 00:33:03.549 "is_configured": true, 00:33:03.549 "data_offset": 0, 00:33:03.549 "data_size": 65536 00:33:03.549 } 00:33:03.549 ] 00:33:03.549 }' 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:03.549 17:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.809 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.809 [2024-11-26 17:29:41.242495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.068 17:29:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:04.068 "name": "Existed_Raid", 00:33:04.068 "aliases": [ 00:33:04.068 "80c0a1ee-e83a-4c75-8b48-3b95f3211517" 00:33:04.068 ], 00:33:04.068 "product_name": "Raid Volume", 00:33:04.068 "block_size": 512, 00:33:04.068 "num_blocks": 65536, 00:33:04.068 "uuid": "80c0a1ee-e83a-4c75-8b48-3b95f3211517", 00:33:04.068 "assigned_rate_limits": { 00:33:04.068 "rw_ios_per_sec": 0, 00:33:04.068 "rw_mbytes_per_sec": 0, 00:33:04.068 "r_mbytes_per_sec": 0, 00:33:04.068 "w_mbytes_per_sec": 0 00:33:04.068 }, 00:33:04.068 "claimed": false, 00:33:04.068 "zoned": false, 00:33:04.068 "supported_io_types": { 00:33:04.068 "read": true, 00:33:04.068 "write": true, 00:33:04.068 "unmap": false, 00:33:04.068 "flush": false, 00:33:04.068 "reset": true, 00:33:04.068 "nvme_admin": false, 00:33:04.068 "nvme_io": false, 00:33:04.068 "nvme_io_md": false, 00:33:04.068 "write_zeroes": true, 00:33:04.068 "zcopy": false, 00:33:04.068 "get_zone_info": false, 00:33:04.068 "zone_management": false, 00:33:04.068 "zone_append": false, 00:33:04.068 "compare": false, 00:33:04.068 "compare_and_write": false, 00:33:04.068 "abort": false, 00:33:04.068 "seek_hole": false, 00:33:04.068 "seek_data": false, 00:33:04.068 "copy": false, 00:33:04.068 "nvme_iov_md": false 00:33:04.068 }, 00:33:04.068 "memory_domains": [ 00:33:04.068 { 00:33:04.068 "dma_device_id": "system", 00:33:04.068 "dma_device_type": 1 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:04.068 "dma_device_type": 2 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "dma_device_id": "system", 00:33:04.068 "dma_device_type": 1 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:04.068 "dma_device_type": 2 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "dma_device_id": "system", 00:33:04.068 "dma_device_type": 1 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:33:04.068 "dma_device_type": 2 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "dma_device_id": "system", 00:33:04.068 "dma_device_type": 1 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:04.068 "dma_device_type": 2 00:33:04.068 } 00:33:04.068 ], 00:33:04.068 "driver_specific": { 00:33:04.068 "raid": { 00:33:04.068 "uuid": "80c0a1ee-e83a-4c75-8b48-3b95f3211517", 00:33:04.068 "strip_size_kb": 0, 00:33:04.068 "state": "online", 00:33:04.068 "raid_level": "raid1", 00:33:04.068 "superblock": false, 00:33:04.068 "num_base_bdevs": 4, 00:33:04.068 "num_base_bdevs_discovered": 4, 00:33:04.068 "num_base_bdevs_operational": 4, 00:33:04.068 "base_bdevs_list": [ 00:33:04.068 { 00:33:04.068 "name": "BaseBdev1", 00:33:04.068 "uuid": "ef688f20-cb0d-4758-8d46-450c4c1f7210", 00:33:04.068 "is_configured": true, 00:33:04.068 "data_offset": 0, 00:33:04.068 "data_size": 65536 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "name": "BaseBdev2", 00:33:04.068 "uuid": "b305a80b-ac67-4e12-a9a9-6f1f9777820b", 00:33:04.068 "is_configured": true, 00:33:04.068 "data_offset": 0, 00:33:04.068 "data_size": 65536 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "name": "BaseBdev3", 00:33:04.068 "uuid": "0acee5f4-5829-4046-9b6a-b2b0dacbdeec", 00:33:04.068 "is_configured": true, 00:33:04.068 "data_offset": 0, 00:33:04.068 "data_size": 65536 00:33:04.068 }, 00:33:04.068 { 00:33:04.068 "name": "BaseBdev4", 00:33:04.068 "uuid": "9f6b4ede-69d3-4aa1-ab86-f10013c5f358", 00:33:04.068 "is_configured": true, 00:33:04.068 "data_offset": 0, 00:33:04.068 "data_size": 65536 00:33:04.068 } 00:33:04.068 ] 00:33:04.068 } 00:33:04.068 } 00:33:04.068 }' 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:04.068 BaseBdev2 00:33:04.068 BaseBdev3 
00:33:04.068 BaseBdev4' 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:04.068 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.069 17:29:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:04.069 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:04.328 17:29:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.328 [2024-11-26 17:29:41.570302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:04.328 
17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.328 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:04.328 "name": "Existed_Raid", 00:33:04.328 "uuid": "80c0a1ee-e83a-4c75-8b48-3b95f3211517", 00:33:04.328 "strip_size_kb": 0, 00:33:04.328 "state": "online", 00:33:04.328 "raid_level": "raid1", 00:33:04.328 "superblock": false, 00:33:04.328 "num_base_bdevs": 4, 00:33:04.328 "num_base_bdevs_discovered": 3, 00:33:04.328 "num_base_bdevs_operational": 3, 00:33:04.328 "base_bdevs_list": [ 00:33:04.328 { 00:33:04.328 "name": null, 00:33:04.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.328 "is_configured": false, 00:33:04.328 "data_offset": 0, 00:33:04.328 "data_size": 65536 00:33:04.328 }, 00:33:04.328 { 00:33:04.328 "name": "BaseBdev2", 00:33:04.328 "uuid": "b305a80b-ac67-4e12-a9a9-6f1f9777820b", 00:33:04.328 "is_configured": true, 00:33:04.328 "data_offset": 0, 00:33:04.328 "data_size": 65536 00:33:04.328 }, 00:33:04.328 { 00:33:04.328 "name": "BaseBdev3", 00:33:04.328 "uuid": "0acee5f4-5829-4046-9b6a-b2b0dacbdeec", 00:33:04.328 "is_configured": true, 00:33:04.328 "data_offset": 0, 
00:33:04.328 "data_size": 65536 00:33:04.328 }, 00:33:04.328 { 00:33:04.328 "name": "BaseBdev4", 00:33:04.329 "uuid": "9f6b4ede-69d3-4aa1-ab86-f10013c5f358", 00:33:04.329 "is_configured": true, 00:33:04.329 "data_offset": 0, 00:33:04.329 "data_size": 65536 00:33:04.329 } 00:33:04.329 ] 00:33:04.329 }' 00:33:04.329 17:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:04.329 17:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.897 [2024-11-26 17:29:42.185379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.897 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.897 [2024-11-26 17:29:42.339401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.157 [2024-11-26 17:29:42.489945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:33:05.157 [2024-11-26 17:29:42.490040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:05.157 [2024-11-26 17:29:42.588539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:05.157 [2024-11-26 17:29:42.588746] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:05.157 [2024-11-26 17:29:42.588860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:05.157 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.504 BaseBdev2 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.504 [ 00:33:05.504 { 00:33:05.504 "name": "BaseBdev2", 00:33:05.504 "aliases": [ 00:33:05.504 "9a319df6-e927-497d-ac07-d0223b97dfb4" 00:33:05.504 ], 00:33:05.504 "product_name": "Malloc disk", 00:33:05.504 "block_size": 512, 00:33:05.504 "num_blocks": 65536, 00:33:05.504 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:05.504 "assigned_rate_limits": { 00:33:05.504 "rw_ios_per_sec": 0, 00:33:05.504 "rw_mbytes_per_sec": 0, 00:33:05.504 "r_mbytes_per_sec": 0, 00:33:05.504 "w_mbytes_per_sec": 0 00:33:05.504 }, 00:33:05.504 "claimed": false, 00:33:05.504 "zoned": false, 00:33:05.504 "supported_io_types": { 00:33:05.504 "read": true, 00:33:05.504 "write": true, 00:33:05.504 "unmap": true, 00:33:05.504 "flush": true, 00:33:05.504 "reset": true, 00:33:05.504 "nvme_admin": false, 00:33:05.504 "nvme_io": false, 00:33:05.504 "nvme_io_md": false, 00:33:05.504 "write_zeroes": true, 00:33:05.504 "zcopy": true, 00:33:05.504 "get_zone_info": false, 00:33:05.504 "zone_management": false, 00:33:05.504 "zone_append": false, 
00:33:05.504 "compare": false, 00:33:05.504 "compare_and_write": false, 00:33:05.504 "abort": true, 00:33:05.504 "seek_hole": false, 00:33:05.504 "seek_data": false, 00:33:05.504 "copy": true, 00:33:05.504 "nvme_iov_md": false 00:33:05.504 }, 00:33:05.504 "memory_domains": [ 00:33:05.504 { 00:33:05.504 "dma_device_id": "system", 00:33:05.504 "dma_device_type": 1 00:33:05.504 }, 00:33:05.504 { 00:33:05.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.504 "dma_device_type": 2 00:33:05.504 } 00:33:05.504 ], 00:33:05.504 "driver_specific": {} 00:33:05.504 } 00:33:05.504 ] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.504 BaseBdev3 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.504 [ 00:33:05.504 { 00:33:05.504 "name": "BaseBdev3", 00:33:05.504 "aliases": [ 00:33:05.504 "06d18c81-3cbb-43a6-adae-dd46c41d5b84" 00:33:05.504 ], 00:33:05.504 "product_name": "Malloc disk", 00:33:05.504 "block_size": 512, 00:33:05.504 "num_blocks": 65536, 00:33:05.504 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:05.504 "assigned_rate_limits": { 00:33:05.504 "rw_ios_per_sec": 0, 00:33:05.504 "rw_mbytes_per_sec": 0, 00:33:05.504 "r_mbytes_per_sec": 0, 00:33:05.504 "w_mbytes_per_sec": 0 00:33:05.504 }, 00:33:05.504 "claimed": false, 00:33:05.504 "zoned": false, 00:33:05.504 "supported_io_types": { 00:33:05.504 "read": true, 00:33:05.504 "write": true, 00:33:05.504 "unmap": true, 00:33:05.504 "flush": true, 00:33:05.504 "reset": true, 00:33:05.504 "nvme_admin": false, 00:33:05.504 "nvme_io": false, 00:33:05.504 "nvme_io_md": false, 00:33:05.504 "write_zeroes": true, 00:33:05.504 "zcopy": true, 00:33:05.504 "get_zone_info": false, 00:33:05.504 "zone_management": false, 00:33:05.504 "zone_append": false, 
00:33:05.504 "compare": false, 00:33:05.504 "compare_and_write": false, 00:33:05.504 "abort": true, 00:33:05.504 "seek_hole": false, 00:33:05.504 "seek_data": false, 00:33:05.504 "copy": true, 00:33:05.504 "nvme_iov_md": false 00:33:05.504 }, 00:33:05.504 "memory_domains": [ 00:33:05.504 { 00:33:05.504 "dma_device_id": "system", 00:33:05.504 "dma_device_type": 1 00:33:05.504 }, 00:33:05.504 { 00:33:05.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.504 "dma_device_type": 2 00:33:05.504 } 00:33:05.504 ], 00:33:05.504 "driver_specific": {} 00:33:05.504 } 00:33:05.504 ] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.504 BaseBdev4 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:05.504 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.505 [ 00:33:05.505 { 00:33:05.505 "name": "BaseBdev4", 00:33:05.505 "aliases": [ 00:33:05.505 "77addabf-1e55-4b15-ac08-66494b9613c4" 00:33:05.505 ], 00:33:05.505 "product_name": "Malloc disk", 00:33:05.505 "block_size": 512, 00:33:05.505 "num_blocks": 65536, 00:33:05.505 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:05.505 "assigned_rate_limits": { 00:33:05.505 "rw_ios_per_sec": 0, 00:33:05.505 "rw_mbytes_per_sec": 0, 00:33:05.505 "r_mbytes_per_sec": 0, 00:33:05.505 "w_mbytes_per_sec": 0 00:33:05.505 }, 00:33:05.505 "claimed": false, 00:33:05.505 "zoned": false, 00:33:05.505 "supported_io_types": { 00:33:05.505 "read": true, 00:33:05.505 "write": true, 00:33:05.505 "unmap": true, 00:33:05.505 "flush": true, 00:33:05.505 "reset": true, 00:33:05.505 "nvme_admin": false, 00:33:05.505 "nvme_io": false, 00:33:05.505 "nvme_io_md": false, 00:33:05.505 "write_zeroes": true, 00:33:05.505 "zcopy": true, 00:33:05.505 "get_zone_info": false, 00:33:05.505 "zone_management": false, 00:33:05.505 "zone_append": false, 
00:33:05.505 "compare": false, 00:33:05.505 "compare_and_write": false, 00:33:05.505 "abort": true, 00:33:05.505 "seek_hole": false, 00:33:05.505 "seek_data": false, 00:33:05.505 "copy": true, 00:33:05.505 "nvme_iov_md": false 00:33:05.505 }, 00:33:05.505 "memory_domains": [ 00:33:05.505 { 00:33:05.505 "dma_device_id": "system", 00:33:05.505 "dma_device_type": 1 00:33:05.505 }, 00:33:05.505 { 00:33:05.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.505 "dma_device_type": 2 00:33:05.505 } 00:33:05.505 ], 00:33:05.505 "driver_specific": {} 00:33:05.505 } 00:33:05.505 ] 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.505 [2024-11-26 17:29:42.867541] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:05.505 [2024-11-26 17:29:42.867595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:05.505 [2024-11-26 17:29:42.867616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:05.505 [2024-11-26 17:29:42.869670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:05.505 [2024-11-26 17:29:42.869717] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.505 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.788 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:33:05.788 "name": "Existed_Raid", 00:33:05.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.788 "strip_size_kb": 0, 00:33:05.788 "state": "configuring", 00:33:05.788 "raid_level": "raid1", 00:33:05.788 "superblock": false, 00:33:05.788 "num_base_bdevs": 4, 00:33:05.788 "num_base_bdevs_discovered": 3, 00:33:05.788 "num_base_bdevs_operational": 4, 00:33:05.788 "base_bdevs_list": [ 00:33:05.788 { 00:33:05.788 "name": "BaseBdev1", 00:33:05.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.788 "is_configured": false, 00:33:05.788 "data_offset": 0, 00:33:05.788 "data_size": 0 00:33:05.788 }, 00:33:05.788 { 00:33:05.788 "name": "BaseBdev2", 00:33:05.788 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:05.788 "is_configured": true, 00:33:05.788 "data_offset": 0, 00:33:05.788 "data_size": 65536 00:33:05.788 }, 00:33:05.788 { 00:33:05.788 "name": "BaseBdev3", 00:33:05.788 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:05.788 "is_configured": true, 00:33:05.788 "data_offset": 0, 00:33:05.788 "data_size": 65536 00:33:05.788 }, 00:33:05.788 { 00:33:05.788 "name": "BaseBdev4", 00:33:05.788 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:05.788 "is_configured": true, 00:33:05.788 "data_offset": 0, 00:33:05.788 "data_size": 65536 00:33:05.788 } 00:33:05.788 ] 00:33:05.788 }' 00:33:05.788 17:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:05.788 17:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.047 [2024-11-26 17:29:43.335685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:06.047 "name": "Existed_Raid", 00:33:06.047 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:06.047 "strip_size_kb": 0, 00:33:06.047 "state": "configuring", 00:33:06.047 "raid_level": "raid1", 00:33:06.047 "superblock": false, 00:33:06.047 "num_base_bdevs": 4, 00:33:06.047 "num_base_bdevs_discovered": 2, 00:33:06.047 "num_base_bdevs_operational": 4, 00:33:06.047 "base_bdevs_list": [ 00:33:06.047 { 00:33:06.047 "name": "BaseBdev1", 00:33:06.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:06.047 "is_configured": false, 00:33:06.047 "data_offset": 0, 00:33:06.047 "data_size": 0 00:33:06.047 }, 00:33:06.047 { 00:33:06.047 "name": null, 00:33:06.047 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:06.047 "is_configured": false, 00:33:06.047 "data_offset": 0, 00:33:06.047 "data_size": 65536 00:33:06.047 }, 00:33:06.047 { 00:33:06.047 "name": "BaseBdev3", 00:33:06.047 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:06.047 "is_configured": true, 00:33:06.047 "data_offset": 0, 00:33:06.047 "data_size": 65536 00:33:06.047 }, 00:33:06.047 { 00:33:06.047 "name": "BaseBdev4", 00:33:06.047 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:06.047 "is_configured": true, 00:33:06.047 "data_offset": 0, 00:33:06.047 "data_size": 65536 00:33:06.047 } 00:33:06.047 ] 00:33:06.047 }' 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:06.047 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.615 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.615 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.615 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:06.615 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.615 17:29:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.615 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:06.615 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.616 [2024-11-26 17:29:43.886529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:06.616 BaseBdev1 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.616 [ 00:33:06.616 { 00:33:06.616 "name": "BaseBdev1", 00:33:06.616 "aliases": [ 00:33:06.616 "efa7eba2-ec94-4793-a6ea-f4649166fb4a" 00:33:06.616 ], 00:33:06.616 "product_name": "Malloc disk", 00:33:06.616 "block_size": 512, 00:33:06.616 "num_blocks": 65536, 00:33:06.616 "uuid": "efa7eba2-ec94-4793-a6ea-f4649166fb4a", 00:33:06.616 "assigned_rate_limits": { 00:33:06.616 "rw_ios_per_sec": 0, 00:33:06.616 "rw_mbytes_per_sec": 0, 00:33:06.616 "r_mbytes_per_sec": 0, 00:33:06.616 "w_mbytes_per_sec": 0 00:33:06.616 }, 00:33:06.616 "claimed": true, 00:33:06.616 "claim_type": "exclusive_write", 00:33:06.616 "zoned": false, 00:33:06.616 "supported_io_types": { 00:33:06.616 "read": true, 00:33:06.616 "write": true, 00:33:06.616 "unmap": true, 00:33:06.616 "flush": true, 00:33:06.616 "reset": true, 00:33:06.616 "nvme_admin": false, 00:33:06.616 "nvme_io": false, 00:33:06.616 "nvme_io_md": false, 00:33:06.616 "write_zeroes": true, 00:33:06.616 "zcopy": true, 00:33:06.616 "get_zone_info": false, 00:33:06.616 "zone_management": false, 00:33:06.616 "zone_append": false, 00:33:06.616 "compare": false, 00:33:06.616 "compare_and_write": false, 00:33:06.616 "abort": true, 00:33:06.616 "seek_hole": false, 00:33:06.616 "seek_data": false, 00:33:06.616 "copy": true, 00:33:06.616 "nvme_iov_md": false 00:33:06.616 }, 00:33:06.616 "memory_domains": [ 00:33:06.616 { 00:33:06.616 "dma_device_id": "system", 00:33:06.616 "dma_device_type": 1 00:33:06.616 }, 00:33:06.616 { 00:33:06.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.616 "dma_device_type": 2 00:33:06.616 } 00:33:06.616 ], 00:33:06.616 "driver_specific": {} 00:33:06.616 } 00:33:06.616 ] 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:06.616 "name": "Existed_Raid", 00:33:06.616 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:06.616 "strip_size_kb": 0, 00:33:06.616 "state": "configuring", 00:33:06.616 "raid_level": "raid1", 00:33:06.616 "superblock": false, 00:33:06.616 "num_base_bdevs": 4, 00:33:06.616 "num_base_bdevs_discovered": 3, 00:33:06.616 "num_base_bdevs_operational": 4, 00:33:06.616 "base_bdevs_list": [ 00:33:06.616 { 00:33:06.616 "name": "BaseBdev1", 00:33:06.616 "uuid": "efa7eba2-ec94-4793-a6ea-f4649166fb4a", 00:33:06.616 "is_configured": true, 00:33:06.616 "data_offset": 0, 00:33:06.616 "data_size": 65536 00:33:06.616 }, 00:33:06.616 { 00:33:06.616 "name": null, 00:33:06.616 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:06.616 "is_configured": false, 00:33:06.616 "data_offset": 0, 00:33:06.616 "data_size": 65536 00:33:06.616 }, 00:33:06.616 { 00:33:06.616 "name": "BaseBdev3", 00:33:06.616 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:06.616 "is_configured": true, 00:33:06.616 "data_offset": 0, 00:33:06.616 "data_size": 65536 00:33:06.616 }, 00:33:06.616 { 00:33:06.616 "name": "BaseBdev4", 00:33:06.616 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:06.616 "is_configured": true, 00:33:06.616 "data_offset": 0, 00:33:06.616 "data_size": 65536 00:33:06.616 } 00:33:06.616 ] 00:33:06.616 }' 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:06.616 17:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.183 [2024-11-26 17:29:44.438748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:07.183 "name": "Existed_Raid", 00:33:07.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.183 "strip_size_kb": 0, 00:33:07.183 "state": "configuring", 00:33:07.183 "raid_level": "raid1", 00:33:07.183 "superblock": false, 00:33:07.183 "num_base_bdevs": 4, 00:33:07.183 "num_base_bdevs_discovered": 2, 00:33:07.183 "num_base_bdevs_operational": 4, 00:33:07.183 "base_bdevs_list": [ 00:33:07.183 { 00:33:07.183 "name": "BaseBdev1", 00:33:07.183 "uuid": "efa7eba2-ec94-4793-a6ea-f4649166fb4a", 00:33:07.183 "is_configured": true, 00:33:07.183 "data_offset": 0, 00:33:07.183 "data_size": 65536 00:33:07.183 }, 00:33:07.183 { 00:33:07.183 "name": null, 00:33:07.183 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:07.183 "is_configured": false, 00:33:07.183 "data_offset": 0, 00:33:07.183 "data_size": 65536 00:33:07.183 }, 00:33:07.183 { 00:33:07.183 "name": null, 00:33:07.183 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:07.183 "is_configured": false, 00:33:07.183 "data_offset": 0, 00:33:07.183 "data_size": 65536 00:33:07.183 }, 00:33:07.183 { 00:33:07.183 "name": "BaseBdev4", 00:33:07.183 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:07.183 "is_configured": true, 00:33:07.183 "data_offset": 0, 00:33:07.183 "data_size": 65536 00:33:07.183 } 00:33:07.183 ] 00:33:07.183 }' 00:33:07.183 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:07.183 17:29:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.442 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.442 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.442 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.442 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.700 [2024-11-26 17:29:44.934873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:07.700 17:29:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:07.700 "name": "Existed_Raid", 00:33:07.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.700 "strip_size_kb": 0, 00:33:07.700 "state": "configuring", 00:33:07.700 "raid_level": "raid1", 00:33:07.700 "superblock": false, 00:33:07.700 "num_base_bdevs": 4, 00:33:07.700 "num_base_bdevs_discovered": 3, 00:33:07.700 "num_base_bdevs_operational": 4, 00:33:07.700 "base_bdevs_list": [ 00:33:07.700 { 00:33:07.700 "name": "BaseBdev1", 00:33:07.700 "uuid": "efa7eba2-ec94-4793-a6ea-f4649166fb4a", 00:33:07.700 "is_configured": true, 00:33:07.700 "data_offset": 0, 00:33:07.700 "data_size": 65536 00:33:07.700 }, 00:33:07.700 { 00:33:07.700 "name": null, 00:33:07.700 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:07.700 "is_configured": false, 00:33:07.700 "data_offset": 
0, 00:33:07.700 "data_size": 65536 00:33:07.700 }, 00:33:07.700 { 00:33:07.700 "name": "BaseBdev3", 00:33:07.700 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:07.700 "is_configured": true, 00:33:07.700 "data_offset": 0, 00:33:07.700 "data_size": 65536 00:33:07.700 }, 00:33:07.700 { 00:33:07.700 "name": "BaseBdev4", 00:33:07.700 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:07.700 "is_configured": true, 00:33:07.700 "data_offset": 0, 00:33:07.700 "data_size": 65536 00:33:07.700 } 00:33:07.700 ] 00:33:07.700 }' 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:07.700 17:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.268 [2024-11-26 17:29:45.455009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.268 17:29:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:08.268 "name": "Existed_Raid", 00:33:08.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.268 "strip_size_kb": 0, 00:33:08.268 "state": "configuring", 00:33:08.268 
"raid_level": "raid1", 00:33:08.268 "superblock": false, 00:33:08.268 "num_base_bdevs": 4, 00:33:08.268 "num_base_bdevs_discovered": 2, 00:33:08.268 "num_base_bdevs_operational": 4, 00:33:08.268 "base_bdevs_list": [ 00:33:08.268 { 00:33:08.268 "name": null, 00:33:08.268 "uuid": "efa7eba2-ec94-4793-a6ea-f4649166fb4a", 00:33:08.268 "is_configured": false, 00:33:08.268 "data_offset": 0, 00:33:08.268 "data_size": 65536 00:33:08.268 }, 00:33:08.268 { 00:33:08.268 "name": null, 00:33:08.268 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:08.268 "is_configured": false, 00:33:08.268 "data_offset": 0, 00:33:08.268 "data_size": 65536 00:33:08.268 }, 00:33:08.268 { 00:33:08.268 "name": "BaseBdev3", 00:33:08.268 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:08.268 "is_configured": true, 00:33:08.268 "data_offset": 0, 00:33:08.268 "data_size": 65536 00:33:08.268 }, 00:33:08.268 { 00:33:08.268 "name": "BaseBdev4", 00:33:08.268 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:08.268 "is_configured": true, 00:33:08.268 "data_offset": 0, 00:33:08.268 "data_size": 65536 00:33:08.268 } 00:33:08.268 ] 00:33:08.268 }' 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:08.268 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.837 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.837 17:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:08.837 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.838 17:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.838 [2024-11-26 17:29:46.047297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:08.838 "name": "Existed_Raid", 00:33:08.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.838 "strip_size_kb": 0, 00:33:08.838 "state": "configuring", 00:33:08.838 "raid_level": "raid1", 00:33:08.838 "superblock": false, 00:33:08.838 "num_base_bdevs": 4, 00:33:08.838 "num_base_bdevs_discovered": 3, 00:33:08.838 "num_base_bdevs_operational": 4, 00:33:08.838 "base_bdevs_list": [ 00:33:08.838 { 00:33:08.838 "name": null, 00:33:08.838 "uuid": "efa7eba2-ec94-4793-a6ea-f4649166fb4a", 00:33:08.838 "is_configured": false, 00:33:08.838 "data_offset": 0, 00:33:08.838 "data_size": 65536 00:33:08.838 }, 00:33:08.838 { 00:33:08.838 "name": "BaseBdev2", 00:33:08.838 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:08.838 "is_configured": true, 00:33:08.838 "data_offset": 0, 00:33:08.838 "data_size": 65536 00:33:08.838 }, 00:33:08.838 { 00:33:08.838 "name": "BaseBdev3", 00:33:08.838 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:08.838 "is_configured": true, 00:33:08.838 "data_offset": 0, 00:33:08.838 "data_size": 65536 00:33:08.838 }, 00:33:08.838 { 00:33:08.838 "name": "BaseBdev4", 00:33:08.838 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:08.838 "is_configured": true, 00:33:08.838 "data_offset": 0, 00:33:08.838 "data_size": 65536 00:33:08.838 } 00:33:08.838 ] 00:33:08.838 }' 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:08.838 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.097 17:29:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.097 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:09.097 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.097 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.097 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.097 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:09.097 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.097 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:09.097 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.097 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u efa7eba2-ec94-4793-a6ea-f4649166fb4a 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.357 [2024-11-26 17:29:46.621807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:09.357 [2024-11-26 17:29:46.621853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:09.357 [2024-11-26 17:29:46.621864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:09.357 
[2024-11-26 17:29:46.622217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:09.357 [2024-11-26 17:29:46.622379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:09.357 [2024-11-26 17:29:46.622391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:33:09.357 NewBaseBdev 00:33:09.357 [2024-11-26 17:29:46.622665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.357 [ 00:33:09.357 { 00:33:09.357 "name": "NewBaseBdev", 00:33:09.357 "aliases": [ 00:33:09.357 "efa7eba2-ec94-4793-a6ea-f4649166fb4a" 00:33:09.357 ], 00:33:09.357 "product_name": "Malloc disk", 00:33:09.357 "block_size": 512, 00:33:09.357 "num_blocks": 65536, 00:33:09.357 "uuid": "efa7eba2-ec94-4793-a6ea-f4649166fb4a", 00:33:09.357 "assigned_rate_limits": { 00:33:09.357 "rw_ios_per_sec": 0, 00:33:09.357 "rw_mbytes_per_sec": 0, 00:33:09.357 "r_mbytes_per_sec": 0, 00:33:09.357 "w_mbytes_per_sec": 0 00:33:09.357 }, 00:33:09.357 "claimed": true, 00:33:09.357 "claim_type": "exclusive_write", 00:33:09.357 "zoned": false, 00:33:09.357 "supported_io_types": { 00:33:09.357 "read": true, 00:33:09.357 "write": true, 00:33:09.357 "unmap": true, 00:33:09.357 "flush": true, 00:33:09.357 "reset": true, 00:33:09.357 "nvme_admin": false, 00:33:09.357 "nvme_io": false, 00:33:09.357 "nvme_io_md": false, 00:33:09.357 "write_zeroes": true, 00:33:09.357 "zcopy": true, 00:33:09.357 "get_zone_info": false, 00:33:09.357 "zone_management": false, 00:33:09.357 "zone_append": false, 00:33:09.357 "compare": false, 00:33:09.357 "compare_and_write": false, 00:33:09.357 "abort": true, 00:33:09.357 "seek_hole": false, 00:33:09.357 "seek_data": false, 00:33:09.357 "copy": true, 00:33:09.357 "nvme_iov_md": false 00:33:09.357 }, 00:33:09.357 "memory_domains": [ 00:33:09.357 { 00:33:09.357 "dma_device_id": "system", 00:33:09.357 "dma_device_type": 1 00:33:09.357 }, 00:33:09.357 { 00:33:09.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.357 "dma_device_type": 2 00:33:09.357 } 00:33:09.357 ], 00:33:09.357 "driver_specific": {} 00:33:09.357 } 00:33:09.357 ] 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.357 "name": "Existed_Raid", 00:33:09.357 "uuid": "58ec302f-dd84-4548-9be5-c21ae1721192", 00:33:09.357 "strip_size_kb": 0, 00:33:09.357 "state": "online", 00:33:09.357 
"raid_level": "raid1", 00:33:09.357 "superblock": false, 00:33:09.357 "num_base_bdevs": 4, 00:33:09.357 "num_base_bdevs_discovered": 4, 00:33:09.357 "num_base_bdevs_operational": 4, 00:33:09.357 "base_bdevs_list": [ 00:33:09.357 { 00:33:09.357 "name": "NewBaseBdev", 00:33:09.357 "uuid": "efa7eba2-ec94-4793-a6ea-f4649166fb4a", 00:33:09.357 "is_configured": true, 00:33:09.357 "data_offset": 0, 00:33:09.357 "data_size": 65536 00:33:09.357 }, 00:33:09.357 { 00:33:09.357 "name": "BaseBdev2", 00:33:09.357 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:09.357 "is_configured": true, 00:33:09.357 "data_offset": 0, 00:33:09.357 "data_size": 65536 00:33:09.357 }, 00:33:09.357 { 00:33:09.357 "name": "BaseBdev3", 00:33:09.357 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:09.357 "is_configured": true, 00:33:09.357 "data_offset": 0, 00:33:09.357 "data_size": 65536 00:33:09.357 }, 00:33:09.357 { 00:33:09.357 "name": "BaseBdev4", 00:33:09.357 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:09.357 "is_configured": true, 00:33:09.357 "data_offset": 0, 00:33:09.357 "data_size": 65536 00:33:09.357 } 00:33:09.357 ] 00:33:09.357 }' 00:33:09.357 17:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.358 17:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.926 [2024-11-26 17:29:47.110313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:09.926 "name": "Existed_Raid", 00:33:09.926 "aliases": [ 00:33:09.926 "58ec302f-dd84-4548-9be5-c21ae1721192" 00:33:09.926 ], 00:33:09.926 "product_name": "Raid Volume", 00:33:09.926 "block_size": 512, 00:33:09.926 "num_blocks": 65536, 00:33:09.926 "uuid": "58ec302f-dd84-4548-9be5-c21ae1721192", 00:33:09.926 "assigned_rate_limits": { 00:33:09.926 "rw_ios_per_sec": 0, 00:33:09.926 "rw_mbytes_per_sec": 0, 00:33:09.926 "r_mbytes_per_sec": 0, 00:33:09.926 "w_mbytes_per_sec": 0 00:33:09.926 }, 00:33:09.926 "claimed": false, 00:33:09.926 "zoned": false, 00:33:09.926 "supported_io_types": { 00:33:09.926 "read": true, 00:33:09.926 "write": true, 00:33:09.926 "unmap": false, 00:33:09.926 "flush": false, 00:33:09.926 "reset": true, 00:33:09.926 "nvme_admin": false, 00:33:09.926 "nvme_io": false, 00:33:09.926 "nvme_io_md": false, 00:33:09.926 "write_zeroes": true, 00:33:09.926 "zcopy": false, 00:33:09.926 "get_zone_info": false, 00:33:09.926 "zone_management": false, 00:33:09.926 "zone_append": false, 00:33:09.926 "compare": false, 00:33:09.926 "compare_and_write": false, 00:33:09.926 "abort": false, 00:33:09.926 "seek_hole": false, 00:33:09.926 "seek_data": false, 00:33:09.926 
"copy": false, 00:33:09.926 "nvme_iov_md": false 00:33:09.926 }, 00:33:09.926 "memory_domains": [ 00:33:09.926 { 00:33:09.926 "dma_device_id": "system", 00:33:09.926 "dma_device_type": 1 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.926 "dma_device_type": 2 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "dma_device_id": "system", 00:33:09.926 "dma_device_type": 1 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.926 "dma_device_type": 2 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "dma_device_id": "system", 00:33:09.926 "dma_device_type": 1 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.926 "dma_device_type": 2 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "dma_device_id": "system", 00:33:09.926 "dma_device_type": 1 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.926 "dma_device_type": 2 00:33:09.926 } 00:33:09.926 ], 00:33:09.926 "driver_specific": { 00:33:09.926 "raid": { 00:33:09.926 "uuid": "58ec302f-dd84-4548-9be5-c21ae1721192", 00:33:09.926 "strip_size_kb": 0, 00:33:09.926 "state": "online", 00:33:09.926 "raid_level": "raid1", 00:33:09.926 "superblock": false, 00:33:09.926 "num_base_bdevs": 4, 00:33:09.926 "num_base_bdevs_discovered": 4, 00:33:09.926 "num_base_bdevs_operational": 4, 00:33:09.926 "base_bdevs_list": [ 00:33:09.926 { 00:33:09.926 "name": "NewBaseBdev", 00:33:09.926 "uuid": "efa7eba2-ec94-4793-a6ea-f4649166fb4a", 00:33:09.926 "is_configured": true, 00:33:09.926 "data_offset": 0, 00:33:09.926 "data_size": 65536 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "name": "BaseBdev2", 00:33:09.926 "uuid": "9a319df6-e927-497d-ac07-d0223b97dfb4", 00:33:09.926 "is_configured": true, 00:33:09.926 "data_offset": 0, 00:33:09.926 "data_size": 65536 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "name": "BaseBdev3", 00:33:09.926 "uuid": "06d18c81-3cbb-43a6-adae-dd46c41d5b84", 00:33:09.926 
"is_configured": true, 00:33:09.926 "data_offset": 0, 00:33:09.926 "data_size": 65536 00:33:09.926 }, 00:33:09.926 { 00:33:09.926 "name": "BaseBdev4", 00:33:09.926 "uuid": "77addabf-1e55-4b15-ac08-66494b9613c4", 00:33:09.926 "is_configured": true, 00:33:09.926 "data_offset": 0, 00:33:09.926 "data_size": 65536 00:33:09.926 } 00:33:09.926 ] 00:33:09.926 } 00:33:09.926 } 00:33:09.926 }' 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:09.926 BaseBdev2 00:33:09.926 BaseBdev3 00:33:09.926 BaseBdev4' 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:09.926 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:09.927 17:29:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.927 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:10.186 17:29:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.186 [2024-11-26 17:29:47.425994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:10.186 [2024-11-26 17:29:47.426142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:10.186 [2024-11-26 17:29:47.426239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:10.186 [2024-11-26 17:29:47.426524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:10.186 [2024-11-26 17:29:47.426541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73614 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73614 ']' 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73614 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73614 00:33:10.186 killing process with pid 73614 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73614' 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73614 00:33:10.186 [2024-11-26 17:29:47.471084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:10.186 17:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73614 00:33:10.445 [2024-11-26 17:29:47.882758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:33:11.823 ************************************ 00:33:11.823 END TEST raid_state_function_test 00:33:11.823 ************************************ 00:33:11.823 00:33:11.823 real 0m11.872s 00:33:11.823 user 0m18.965s 00:33:11.823 sys 0m2.232s 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
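The test above repeatedly calls `rpc_cmd bdev_raid_get_bdevs all`, filters the JSON with `jq -r '.[] | select(.name == "Existed_Raid")'`, and checks `state`, `raid_level`, and the base-bdev counts after each add/remove step. The sketch below re-implements that check in Python under stated assumptions: the field names mirror the JSON dumps in this log, but the helper name and the inline sample data are stand-ins for the real `verify_raid_bdev_state` shell function and live RPC output, not SPDK code.

```python
# Hypothetical Python equivalent of the verify_raid_bdev_state pattern seen in
# this log: parse bdev_raid_get_bdevs output, select the raid bdev by name, and
# assert its state, level, and base-bdev counts. Sample data is illustrative.
import json

def verify_raid_bdev_state(rpc_json, name, expected_state, raid_level, num_operational):
    bdevs = json.loads(rpc_json)
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered count must match the number of configured base bdevs.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

# Stand-in for RPC output: a 4-disk raid1 with two base bdevs still missing,
# matching the "configuring" dumps in the log above.
sample = json.dumps([{
    "name": "Existed_Raid", "state": "configuring", "raid_level": "raid1",
    "num_base_bdevs": 4, "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": None, "is_configured": False},
        {"name": None, "is_configured": False},
        {"name": "BaseBdev4", "is_configured": True},
    ],
}])
print(verify_raid_bdev_state(sample, "Existed_Raid", "configuring", "raid1", 4))
```

Once all four base bdevs are attached (as in the final `"state": "online"` dump), the same check would be called with `expected_state="online"` and would return 4.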
00:33:11.823 17:29:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:33:11.823 17:29:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:11.823 17:29:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:11.823 17:29:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:11.823 ************************************ 00:33:11.823 START TEST raid_state_function_test_sb 00:33:11.823 ************************************ 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:11.823 
17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74292 00:33:11.823 Process raid pid: 74292 00:33:11.823 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74292' 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74292 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74292 ']' 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:11.823 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.823 [2024-11-26 17:29:49.201091] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:33:11.823 [2024-11-26 17:29:49.201340] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.082 [2024-11-26 17:29:49.372039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.082 [2024-11-26 17:29:49.489589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.341 [2024-11-26 17:29:49.702813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:12.341 [2024-11-26 17:29:49.702855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.909 [2024-11-26 17:29:50.106776] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:12.909 [2024-11-26 17:29:50.106834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:12.909 [2024-11-26 17:29:50.106846] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:12.909 [2024-11-26 17:29:50.106859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:12.909 [2024-11-26 17:29:50.106867] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:33:12.909 [2024-11-26 17:29:50.106879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:12.909 [2024-11-26 17:29:50.106887] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:12.909 [2024-11-26 17:29:50.106899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.909 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:12.909 17:29:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.910 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.910 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.910 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:12.910 "name": "Existed_Raid", 00:33:12.910 "uuid": "4ed5a34c-4b88-4049-a5ca-2f48eec95155", 00:33:12.910 "strip_size_kb": 0, 00:33:12.910 "state": "configuring", 00:33:12.910 "raid_level": "raid1", 00:33:12.910 "superblock": true, 00:33:12.910 "num_base_bdevs": 4, 00:33:12.910 "num_base_bdevs_discovered": 0, 00:33:12.910 "num_base_bdevs_operational": 4, 00:33:12.910 "base_bdevs_list": [ 00:33:12.910 { 00:33:12.910 "name": "BaseBdev1", 00:33:12.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.910 "is_configured": false, 00:33:12.910 "data_offset": 0, 00:33:12.910 "data_size": 0 00:33:12.910 }, 00:33:12.910 { 00:33:12.910 "name": "BaseBdev2", 00:33:12.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.910 "is_configured": false, 00:33:12.910 "data_offset": 0, 00:33:12.910 "data_size": 0 00:33:12.910 }, 00:33:12.910 { 00:33:12.910 "name": "BaseBdev3", 00:33:12.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.910 "is_configured": false, 00:33:12.910 "data_offset": 0, 00:33:12.910 "data_size": 0 00:33:12.910 }, 00:33:12.910 { 00:33:12.910 "name": "BaseBdev4", 00:33:12.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.910 "is_configured": false, 00:33:12.910 "data_offset": 0, 00:33:12.910 "data_size": 0 00:33:12.910 } 00:33:12.910 ] 00:33:12.910 }' 00:33:12.910 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:12.910 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.168 [2024-11-26 17:29:50.562830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:13.168 [2024-11-26 17:29:50.562875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.168 [2024-11-26 17:29:50.570803] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:13.168 [2024-11-26 17:29:50.570850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:13.168 [2024-11-26 17:29:50.570860] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:13.168 [2024-11-26 17:29:50.570873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:13.168 [2024-11-26 17:29:50.570881] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:13.168 [2024-11-26 17:29:50.570893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:13.168 [2024-11-26 17:29:50.570901] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:33:13.168 [2024-11-26 17:29:50.570913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.168 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.426 [2024-11-26 17:29:50.618108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:13.426 BaseBdev1 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.426 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.427 [ 00:33:13.427 { 00:33:13.427 "name": "BaseBdev1", 00:33:13.427 "aliases": [ 00:33:13.427 "1cd18997-7ce8-43b6-ae93-6285f8df2a00" 00:33:13.427 ], 00:33:13.427 "product_name": "Malloc disk", 00:33:13.427 "block_size": 512, 00:33:13.427 "num_blocks": 65536, 00:33:13.427 "uuid": "1cd18997-7ce8-43b6-ae93-6285f8df2a00", 00:33:13.427 "assigned_rate_limits": { 00:33:13.427 "rw_ios_per_sec": 0, 00:33:13.427 "rw_mbytes_per_sec": 0, 00:33:13.427 "r_mbytes_per_sec": 0, 00:33:13.427 "w_mbytes_per_sec": 0 00:33:13.427 }, 00:33:13.427 "claimed": true, 00:33:13.427 "claim_type": "exclusive_write", 00:33:13.427 "zoned": false, 00:33:13.427 "supported_io_types": { 00:33:13.427 "read": true, 00:33:13.427 "write": true, 00:33:13.427 "unmap": true, 00:33:13.427 "flush": true, 00:33:13.427 "reset": true, 00:33:13.427 "nvme_admin": false, 00:33:13.427 "nvme_io": false, 00:33:13.427 "nvme_io_md": false, 00:33:13.427 "write_zeroes": true, 00:33:13.427 "zcopy": true, 00:33:13.427 "get_zone_info": false, 00:33:13.427 "zone_management": false, 00:33:13.427 "zone_append": false, 00:33:13.427 "compare": false, 00:33:13.427 "compare_and_write": false, 00:33:13.427 "abort": true, 00:33:13.427 "seek_hole": false, 00:33:13.427 "seek_data": false, 00:33:13.427 "copy": true, 00:33:13.427 "nvme_iov_md": false 00:33:13.427 }, 00:33:13.427 "memory_domains": [ 00:33:13.427 { 00:33:13.427 "dma_device_id": "system", 00:33:13.427 "dma_device_type": 1 00:33:13.427 }, 00:33:13.427 { 00:33:13.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:13.427 "dma_device_type": 2 00:33:13.427 } 00:33:13.427 ], 00:33:13.427 "driver_specific": {} 
00:33:13.427 } 00:33:13.427 ] 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:13.427 "name": "Existed_Raid", 00:33:13.427 "uuid": "ba526094-ab2e-4c4a-b42d-e7a707d89f77", 00:33:13.427 "strip_size_kb": 0, 00:33:13.427 "state": "configuring", 00:33:13.427 "raid_level": "raid1", 00:33:13.427 "superblock": true, 00:33:13.427 "num_base_bdevs": 4, 00:33:13.427 "num_base_bdevs_discovered": 1, 00:33:13.427 "num_base_bdevs_operational": 4, 00:33:13.427 "base_bdevs_list": [ 00:33:13.427 { 00:33:13.427 "name": "BaseBdev1", 00:33:13.427 "uuid": "1cd18997-7ce8-43b6-ae93-6285f8df2a00", 00:33:13.427 "is_configured": true, 00:33:13.427 "data_offset": 2048, 00:33:13.427 "data_size": 63488 00:33:13.427 }, 00:33:13.427 { 00:33:13.427 "name": "BaseBdev2", 00:33:13.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.427 "is_configured": false, 00:33:13.427 "data_offset": 0, 00:33:13.427 "data_size": 0 00:33:13.427 }, 00:33:13.427 { 00:33:13.427 "name": "BaseBdev3", 00:33:13.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.427 "is_configured": false, 00:33:13.427 "data_offset": 0, 00:33:13.427 "data_size": 0 00:33:13.427 }, 00:33:13.427 { 00:33:13.427 "name": "BaseBdev4", 00:33:13.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.427 "is_configured": false, 00:33:13.427 "data_offset": 0, 00:33:13.427 "data_size": 0 00:33:13.427 } 00:33:13.427 ] 00:33:13.427 }' 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:13.427 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.685 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:13.685 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.685 17:29:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:13.685 [2024-11-26 17:29:51.126260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:13.685 [2024-11-26 17:29:51.126319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.962 [2024-11-26 17:29:51.134301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:13.962 [2024-11-26 17:29:51.136719] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:13.962 [2024-11-26 17:29:51.136885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:13.962 [2024-11-26 17:29:51.136975] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:13.962 [2024-11-26 17:29:51.137099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:13.962 [2024-11-26 17:29:51.137186] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:13.962 [2024-11-26 17:29:51.137234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:13.962 17:29:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:13.962 "name": 
"Existed_Raid", 00:33:13.962 "uuid": "5d834ba8-4415-4a7f-9c9d-6d5a4bb4f405", 00:33:13.962 "strip_size_kb": 0, 00:33:13.962 "state": "configuring", 00:33:13.962 "raid_level": "raid1", 00:33:13.962 "superblock": true, 00:33:13.962 "num_base_bdevs": 4, 00:33:13.962 "num_base_bdevs_discovered": 1, 00:33:13.962 "num_base_bdevs_operational": 4, 00:33:13.962 "base_bdevs_list": [ 00:33:13.962 { 00:33:13.962 "name": "BaseBdev1", 00:33:13.962 "uuid": "1cd18997-7ce8-43b6-ae93-6285f8df2a00", 00:33:13.962 "is_configured": true, 00:33:13.962 "data_offset": 2048, 00:33:13.962 "data_size": 63488 00:33:13.962 }, 00:33:13.962 { 00:33:13.962 "name": "BaseBdev2", 00:33:13.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.962 "is_configured": false, 00:33:13.962 "data_offset": 0, 00:33:13.962 "data_size": 0 00:33:13.962 }, 00:33:13.962 { 00:33:13.962 "name": "BaseBdev3", 00:33:13.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.962 "is_configured": false, 00:33:13.962 "data_offset": 0, 00:33:13.962 "data_size": 0 00:33:13.962 }, 00:33:13.962 { 00:33:13.962 "name": "BaseBdev4", 00:33:13.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.962 "is_configured": false, 00:33:13.962 "data_offset": 0, 00:33:13.962 "data_size": 0 00:33:13.962 } 00:33:13.962 ] 00:33:13.962 }' 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:13.962 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.257 [2024-11-26 17:29:51.640399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:14.257 
BaseBdev2 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.257 [ 00:33:14.257 { 00:33:14.257 "name": "BaseBdev2", 00:33:14.257 "aliases": [ 00:33:14.257 "0fd7487d-cfee-4289-aee0-158f96ca46b9" 00:33:14.257 ], 00:33:14.257 "product_name": "Malloc disk", 00:33:14.257 "block_size": 512, 00:33:14.257 "num_blocks": 65536, 00:33:14.257 "uuid": "0fd7487d-cfee-4289-aee0-158f96ca46b9", 00:33:14.257 "assigned_rate_limits": { 
00:33:14.257 "rw_ios_per_sec": 0, 00:33:14.257 "rw_mbytes_per_sec": 0, 00:33:14.257 "r_mbytes_per_sec": 0, 00:33:14.257 "w_mbytes_per_sec": 0 00:33:14.257 }, 00:33:14.257 "claimed": true, 00:33:14.257 "claim_type": "exclusive_write", 00:33:14.257 "zoned": false, 00:33:14.257 "supported_io_types": { 00:33:14.257 "read": true, 00:33:14.257 "write": true, 00:33:14.257 "unmap": true, 00:33:14.257 "flush": true, 00:33:14.257 "reset": true, 00:33:14.257 "nvme_admin": false, 00:33:14.257 "nvme_io": false, 00:33:14.257 "nvme_io_md": false, 00:33:14.257 "write_zeroes": true, 00:33:14.257 "zcopy": true, 00:33:14.257 "get_zone_info": false, 00:33:14.257 "zone_management": false, 00:33:14.257 "zone_append": false, 00:33:14.257 "compare": false, 00:33:14.257 "compare_and_write": false, 00:33:14.257 "abort": true, 00:33:14.257 "seek_hole": false, 00:33:14.257 "seek_data": false, 00:33:14.257 "copy": true, 00:33:14.257 "nvme_iov_md": false 00:33:14.257 }, 00:33:14.257 "memory_domains": [ 00:33:14.257 { 00:33:14.257 "dma_device_id": "system", 00:33:14.257 "dma_device_type": 1 00:33:14.257 }, 00:33:14.257 { 00:33:14.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:14.257 "dma_device_type": 2 00:33:14.257 } 00:33:14.257 ], 00:33:14.257 "driver_specific": {} 00:33:14.257 } 00:33:14.257 ] 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.257 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.516 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.516 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.516 "name": "Existed_Raid", 00:33:14.516 "uuid": "5d834ba8-4415-4a7f-9c9d-6d5a4bb4f405", 00:33:14.516 "strip_size_kb": 0, 00:33:14.516 "state": "configuring", 00:33:14.516 "raid_level": "raid1", 00:33:14.516 "superblock": true, 00:33:14.516 "num_base_bdevs": 4, 00:33:14.516 "num_base_bdevs_discovered": 2, 00:33:14.516 "num_base_bdevs_operational": 4, 00:33:14.516 
"base_bdevs_list": [ 00:33:14.516 { 00:33:14.516 "name": "BaseBdev1", 00:33:14.516 "uuid": "1cd18997-7ce8-43b6-ae93-6285f8df2a00", 00:33:14.516 "is_configured": true, 00:33:14.516 "data_offset": 2048, 00:33:14.516 "data_size": 63488 00:33:14.516 }, 00:33:14.516 { 00:33:14.516 "name": "BaseBdev2", 00:33:14.516 "uuid": "0fd7487d-cfee-4289-aee0-158f96ca46b9", 00:33:14.516 "is_configured": true, 00:33:14.516 "data_offset": 2048, 00:33:14.516 "data_size": 63488 00:33:14.516 }, 00:33:14.516 { 00:33:14.516 "name": "BaseBdev3", 00:33:14.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.516 "is_configured": false, 00:33:14.516 "data_offset": 0, 00:33:14.516 "data_size": 0 00:33:14.516 }, 00:33:14.516 { 00:33:14.516 "name": "BaseBdev4", 00:33:14.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.516 "is_configured": false, 00:33:14.516 "data_offset": 0, 00:33:14.516 "data_size": 0 00:33:14.516 } 00:33:14.516 ] 00:33:14.516 }' 00:33:14.516 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.516 17:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 [2024-11-26 17:29:52.159910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:14.775 BaseBdev3 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.775 [ 00:33:14.775 { 00:33:14.775 "name": "BaseBdev3", 00:33:14.775 "aliases": [ 00:33:14.775 "125b3c45-b66f-4b97-9aaa-1a71b6573b0c" 00:33:14.775 ], 00:33:14.775 "product_name": "Malloc disk", 00:33:14.775 "block_size": 512, 00:33:14.775 "num_blocks": 65536, 00:33:14.775 "uuid": "125b3c45-b66f-4b97-9aaa-1a71b6573b0c", 00:33:14.775 "assigned_rate_limits": { 00:33:14.775 "rw_ios_per_sec": 0, 00:33:14.775 "rw_mbytes_per_sec": 0, 00:33:14.775 "r_mbytes_per_sec": 0, 00:33:14.775 "w_mbytes_per_sec": 0 00:33:14.775 }, 00:33:14.775 "claimed": true, 00:33:14.775 "claim_type": "exclusive_write", 00:33:14.775 "zoned": false, 00:33:14.775 "supported_io_types": { 00:33:14.775 "read": true, 00:33:14.775 
"write": true, 00:33:14.775 "unmap": true, 00:33:14.775 "flush": true, 00:33:14.775 "reset": true, 00:33:14.775 "nvme_admin": false, 00:33:14.775 "nvme_io": false, 00:33:14.775 "nvme_io_md": false, 00:33:14.775 "write_zeroes": true, 00:33:14.775 "zcopy": true, 00:33:14.775 "get_zone_info": false, 00:33:14.775 "zone_management": false, 00:33:14.775 "zone_append": false, 00:33:14.775 "compare": false, 00:33:14.775 "compare_and_write": false, 00:33:14.775 "abort": true, 00:33:14.775 "seek_hole": false, 00:33:14.775 "seek_data": false, 00:33:14.775 "copy": true, 00:33:14.775 "nvme_iov_md": false 00:33:14.775 }, 00:33:14.775 "memory_domains": [ 00:33:14.775 { 00:33:14.775 "dma_device_id": "system", 00:33:14.775 "dma_device_type": 1 00:33:14.775 }, 00:33:14.775 { 00:33:14.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:14.775 "dma_device_type": 2 00:33:14.775 } 00:33:14.775 ], 00:33:14.775 "driver_specific": {} 00:33:14.775 } 00:33:14.775 ] 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.775 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.034 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.034 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:15.034 "name": "Existed_Raid", 00:33:15.034 "uuid": "5d834ba8-4415-4a7f-9c9d-6d5a4bb4f405", 00:33:15.034 "strip_size_kb": 0, 00:33:15.034 "state": "configuring", 00:33:15.034 "raid_level": "raid1", 00:33:15.034 "superblock": true, 00:33:15.034 "num_base_bdevs": 4, 00:33:15.034 "num_base_bdevs_discovered": 3, 00:33:15.034 "num_base_bdevs_operational": 4, 00:33:15.034 "base_bdevs_list": [ 00:33:15.034 { 00:33:15.034 "name": "BaseBdev1", 00:33:15.034 "uuid": "1cd18997-7ce8-43b6-ae93-6285f8df2a00", 00:33:15.034 "is_configured": true, 00:33:15.034 "data_offset": 2048, 00:33:15.034 "data_size": 63488 00:33:15.034 }, 00:33:15.034 { 00:33:15.034 "name": "BaseBdev2", 00:33:15.034 "uuid": 
"0fd7487d-cfee-4289-aee0-158f96ca46b9", 00:33:15.034 "is_configured": true, 00:33:15.034 "data_offset": 2048, 00:33:15.034 "data_size": 63488 00:33:15.034 }, 00:33:15.034 { 00:33:15.034 "name": "BaseBdev3", 00:33:15.034 "uuid": "125b3c45-b66f-4b97-9aaa-1a71b6573b0c", 00:33:15.034 "is_configured": true, 00:33:15.034 "data_offset": 2048, 00:33:15.034 "data_size": 63488 00:33:15.034 }, 00:33:15.034 { 00:33:15.034 "name": "BaseBdev4", 00:33:15.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:15.034 "is_configured": false, 00:33:15.034 "data_offset": 0, 00:33:15.034 "data_size": 0 00:33:15.034 } 00:33:15.034 ] 00:33:15.034 }' 00:33:15.034 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:15.034 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.294 [2024-11-26 17:29:52.672140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:15.294 [2024-11-26 17:29:52.672391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:15.294 [2024-11-26 17:29:52.672408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:15.294 BaseBdev4 00:33:15.294 [2024-11-26 17:29:52.672688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:15.294 [2024-11-26 17:29:52.672854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:15.294 [2024-11-26 17:29:52.672869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:33:15.294 [2024-11-26 17:29:52.673025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.294 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.294 [ 00:33:15.294 { 00:33:15.294 "name": "BaseBdev4", 00:33:15.294 "aliases": [ 00:33:15.294 "09627040-28ec-4e04-85cc-213b1a25085f" 00:33:15.294 ], 00:33:15.294 "product_name": "Malloc disk", 00:33:15.294 "block_size": 512, 00:33:15.294 
"num_blocks": 65536, 00:33:15.294 "uuid": "09627040-28ec-4e04-85cc-213b1a25085f", 00:33:15.294 "assigned_rate_limits": { 00:33:15.294 "rw_ios_per_sec": 0, 00:33:15.294 "rw_mbytes_per_sec": 0, 00:33:15.294 "r_mbytes_per_sec": 0, 00:33:15.294 "w_mbytes_per_sec": 0 00:33:15.294 }, 00:33:15.294 "claimed": true, 00:33:15.295 "claim_type": "exclusive_write", 00:33:15.295 "zoned": false, 00:33:15.295 "supported_io_types": { 00:33:15.295 "read": true, 00:33:15.295 "write": true, 00:33:15.295 "unmap": true, 00:33:15.295 "flush": true, 00:33:15.295 "reset": true, 00:33:15.295 "nvme_admin": false, 00:33:15.295 "nvme_io": false, 00:33:15.295 "nvme_io_md": false, 00:33:15.295 "write_zeroes": true, 00:33:15.295 "zcopy": true, 00:33:15.295 "get_zone_info": false, 00:33:15.295 "zone_management": false, 00:33:15.295 "zone_append": false, 00:33:15.295 "compare": false, 00:33:15.295 "compare_and_write": false, 00:33:15.295 "abort": true, 00:33:15.295 "seek_hole": false, 00:33:15.295 "seek_data": false, 00:33:15.295 "copy": true, 00:33:15.295 "nvme_iov_md": false 00:33:15.295 }, 00:33:15.295 "memory_domains": [ 00:33:15.295 { 00:33:15.295 "dma_device_id": "system", 00:33:15.295 "dma_device_type": 1 00:33:15.295 }, 00:33:15.295 { 00:33:15.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.295 "dma_device_type": 2 00:33:15.295 } 00:33:15.295 ], 00:33:15.295 "driver_specific": {} 00:33:15.295 } 00:33:15.295 ] 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.295 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.554 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:15.554 "name": "Existed_Raid", 00:33:15.554 "uuid": "5d834ba8-4415-4a7f-9c9d-6d5a4bb4f405", 00:33:15.554 "strip_size_kb": 0, 00:33:15.554 "state": "online", 00:33:15.554 "raid_level": "raid1", 00:33:15.554 "superblock": true, 00:33:15.554 "num_base_bdevs": 4, 
00:33:15.554 "num_base_bdevs_discovered": 4, 00:33:15.554 "num_base_bdevs_operational": 4, 00:33:15.554 "base_bdevs_list": [ 00:33:15.554 { 00:33:15.554 "name": "BaseBdev1", 00:33:15.554 "uuid": "1cd18997-7ce8-43b6-ae93-6285f8df2a00", 00:33:15.554 "is_configured": true, 00:33:15.554 "data_offset": 2048, 00:33:15.554 "data_size": 63488 00:33:15.554 }, 00:33:15.554 { 00:33:15.554 "name": "BaseBdev2", 00:33:15.554 "uuid": "0fd7487d-cfee-4289-aee0-158f96ca46b9", 00:33:15.554 "is_configured": true, 00:33:15.554 "data_offset": 2048, 00:33:15.554 "data_size": 63488 00:33:15.554 }, 00:33:15.554 { 00:33:15.554 "name": "BaseBdev3", 00:33:15.554 "uuid": "125b3c45-b66f-4b97-9aaa-1a71b6573b0c", 00:33:15.554 "is_configured": true, 00:33:15.554 "data_offset": 2048, 00:33:15.554 "data_size": 63488 00:33:15.554 }, 00:33:15.554 { 00:33:15.554 "name": "BaseBdev4", 00:33:15.554 "uuid": "09627040-28ec-4e04-85cc-213b1a25085f", 00:33:15.554 "is_configured": true, 00:33:15.555 "data_offset": 2048, 00:33:15.555 "data_size": 63488 00:33:15.555 } 00:33:15.555 ] 00:33:15.555 }' 00:33:15.555 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:15.555 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:15.814 
17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:15.814 [2024-11-26 17:29:53.161080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:15.814 "name": "Existed_Raid", 00:33:15.814 "aliases": [ 00:33:15.814 "5d834ba8-4415-4a7f-9c9d-6d5a4bb4f405" 00:33:15.814 ], 00:33:15.814 "product_name": "Raid Volume", 00:33:15.814 "block_size": 512, 00:33:15.814 "num_blocks": 63488, 00:33:15.814 "uuid": "5d834ba8-4415-4a7f-9c9d-6d5a4bb4f405", 00:33:15.814 "assigned_rate_limits": { 00:33:15.814 "rw_ios_per_sec": 0, 00:33:15.814 "rw_mbytes_per_sec": 0, 00:33:15.814 "r_mbytes_per_sec": 0, 00:33:15.814 "w_mbytes_per_sec": 0 00:33:15.814 }, 00:33:15.814 "claimed": false, 00:33:15.814 "zoned": false, 00:33:15.814 "supported_io_types": { 00:33:15.814 "read": true, 00:33:15.814 "write": true, 00:33:15.814 "unmap": false, 00:33:15.814 "flush": false, 00:33:15.814 "reset": true, 00:33:15.814 "nvme_admin": false, 00:33:15.814 "nvme_io": false, 00:33:15.814 "nvme_io_md": false, 00:33:15.814 "write_zeroes": true, 00:33:15.814 "zcopy": false, 00:33:15.814 "get_zone_info": false, 00:33:15.814 "zone_management": false, 00:33:15.814 "zone_append": false, 00:33:15.814 "compare": false, 00:33:15.814 "compare_and_write": false, 00:33:15.814 "abort": false, 00:33:15.814 "seek_hole": false, 00:33:15.814 "seek_data": false, 00:33:15.814 "copy": false, 00:33:15.814 
"nvme_iov_md": false 00:33:15.814 }, 00:33:15.814 "memory_domains": [ 00:33:15.814 { 00:33:15.814 "dma_device_id": "system", 00:33:15.814 "dma_device_type": 1 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.814 "dma_device_type": 2 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "dma_device_id": "system", 00:33:15.814 "dma_device_type": 1 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.814 "dma_device_type": 2 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "dma_device_id": "system", 00:33:15.814 "dma_device_type": 1 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.814 "dma_device_type": 2 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "dma_device_id": "system", 00:33:15.814 "dma_device_type": 1 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.814 "dma_device_type": 2 00:33:15.814 } 00:33:15.814 ], 00:33:15.814 "driver_specific": { 00:33:15.814 "raid": { 00:33:15.814 "uuid": "5d834ba8-4415-4a7f-9c9d-6d5a4bb4f405", 00:33:15.814 "strip_size_kb": 0, 00:33:15.814 "state": "online", 00:33:15.814 "raid_level": "raid1", 00:33:15.814 "superblock": true, 00:33:15.814 "num_base_bdevs": 4, 00:33:15.814 "num_base_bdevs_discovered": 4, 00:33:15.814 "num_base_bdevs_operational": 4, 00:33:15.814 "base_bdevs_list": [ 00:33:15.814 { 00:33:15.814 "name": "BaseBdev1", 00:33:15.814 "uuid": "1cd18997-7ce8-43b6-ae93-6285f8df2a00", 00:33:15.814 "is_configured": true, 00:33:15.814 "data_offset": 2048, 00:33:15.814 "data_size": 63488 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "name": "BaseBdev2", 00:33:15.814 "uuid": "0fd7487d-cfee-4289-aee0-158f96ca46b9", 00:33:15.814 "is_configured": true, 00:33:15.814 "data_offset": 2048, 00:33:15.814 "data_size": 63488 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "name": "BaseBdev3", 00:33:15.814 "uuid": "125b3c45-b66f-4b97-9aaa-1a71b6573b0c", 00:33:15.814 "is_configured": true, 
00:33:15.814 "data_offset": 2048, 00:33:15.814 "data_size": 63488 00:33:15.814 }, 00:33:15.814 { 00:33:15.814 "name": "BaseBdev4", 00:33:15.814 "uuid": "09627040-28ec-4e04-85cc-213b1a25085f", 00:33:15.814 "is_configured": true, 00:33:15.814 "data_offset": 2048, 00:33:15.814 "data_size": 63488 00:33:15.814 } 00:33:15.814 ] 00:33:15.814 } 00:33:15.814 } 00:33:15.814 }' 00:33:15.814 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:16.074 BaseBdev2 00:33:16.074 BaseBdev3 00:33:16.074 BaseBdev4' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:16.074 17:29:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.074 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.074 [2024-11-26 17:29:53.484810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:16.333 17:29:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.333 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.333 "name": "Existed_Raid", 00:33:16.333 "uuid": "5d834ba8-4415-4a7f-9c9d-6d5a4bb4f405", 00:33:16.333 "strip_size_kb": 0, 00:33:16.333 
"state": "online", 00:33:16.333 "raid_level": "raid1", 00:33:16.333 "superblock": true, 00:33:16.333 "num_base_bdevs": 4, 00:33:16.333 "num_base_bdevs_discovered": 3, 00:33:16.333 "num_base_bdevs_operational": 3, 00:33:16.333 "base_bdevs_list": [ 00:33:16.333 { 00:33:16.333 "name": null, 00:33:16.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.333 "is_configured": false, 00:33:16.333 "data_offset": 0, 00:33:16.333 "data_size": 63488 00:33:16.333 }, 00:33:16.333 { 00:33:16.333 "name": "BaseBdev2", 00:33:16.333 "uuid": "0fd7487d-cfee-4289-aee0-158f96ca46b9", 00:33:16.333 "is_configured": true, 00:33:16.333 "data_offset": 2048, 00:33:16.333 "data_size": 63488 00:33:16.333 }, 00:33:16.333 { 00:33:16.333 "name": "BaseBdev3", 00:33:16.333 "uuid": "125b3c45-b66f-4b97-9aaa-1a71b6573b0c", 00:33:16.334 "is_configured": true, 00:33:16.334 "data_offset": 2048, 00:33:16.334 "data_size": 63488 00:33:16.334 }, 00:33:16.334 { 00:33:16.334 "name": "BaseBdev4", 00:33:16.334 "uuid": "09627040-28ec-4e04-85cc-213b1a25085f", 00:33:16.334 "is_configured": true, 00:33:16.334 "data_offset": 2048, 00:33:16.334 "data_size": 63488 00:33:16.334 } 00:33:16.334 ] 00:33:16.334 }' 00:33:16.334 17:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.334 17:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.593 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:16.593 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:16.593 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:16.593 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.593 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.593 17:29:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.593 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.851 [2024-11-26 17:29:54.053342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.851 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.851 [2024-11-26 17:29:54.211282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.110 [2024-11-26 17:29:54.382958] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:33:17.110 [2024-11-26 17:29:54.383079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:17.110 [2024-11-26 17:29:54.481988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:17.110 [2024-11-26 17:29:54.482257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:17.110 [2024-11-26 17:29:54.482466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:33:17.110 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:17.111 17:29:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:17.111 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:17.111 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.111 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.370 BaseBdev2 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:33:17.370 [ 00:33:17.370 { 00:33:17.370 "name": "BaseBdev2", 00:33:17.370 "aliases": [ 00:33:17.370 "3981584f-fc30-45b9-bbaf-b46ec517f93a" 00:33:17.370 ], 00:33:17.370 "product_name": "Malloc disk", 00:33:17.370 "block_size": 512, 00:33:17.370 "num_blocks": 65536, 00:33:17.370 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 00:33:17.370 "assigned_rate_limits": { 00:33:17.370 "rw_ios_per_sec": 0, 00:33:17.370 "rw_mbytes_per_sec": 0, 00:33:17.370 "r_mbytes_per_sec": 0, 00:33:17.370 "w_mbytes_per_sec": 0 00:33:17.370 }, 00:33:17.370 "claimed": false, 00:33:17.370 "zoned": false, 00:33:17.370 "supported_io_types": { 00:33:17.370 "read": true, 00:33:17.370 "write": true, 00:33:17.370 "unmap": true, 00:33:17.370 "flush": true, 00:33:17.370 "reset": true, 00:33:17.370 "nvme_admin": false, 00:33:17.370 "nvme_io": false, 00:33:17.370 "nvme_io_md": false, 00:33:17.370 "write_zeroes": true, 00:33:17.370 "zcopy": true, 00:33:17.370 "get_zone_info": false, 00:33:17.370 "zone_management": false, 00:33:17.370 "zone_append": false, 00:33:17.370 "compare": false, 00:33:17.370 "compare_and_write": false, 00:33:17.370 "abort": true, 00:33:17.370 "seek_hole": false, 00:33:17.370 "seek_data": false, 00:33:17.370 "copy": true, 00:33:17.370 "nvme_iov_md": false 00:33:17.370 }, 00:33:17.370 "memory_domains": [ 00:33:17.370 { 00:33:17.370 "dma_device_id": "system", 00:33:17.370 "dma_device_type": 1 00:33:17.370 }, 00:33:17.370 { 00:33:17.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.370 "dma_device_type": 2 00:33:17.370 } 00:33:17.370 ], 00:33:17.370 "driver_specific": {} 00:33:17.370 } 00:33:17.370 ] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:17.370 17:29:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.370 BaseBdev3 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.370 17:29:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.370 [ 00:33:17.370 { 00:33:17.370 "name": "BaseBdev3", 00:33:17.370 "aliases": [ 00:33:17.370 "fc6dc222-1148-4cc4-b506-8aeeceb79f67" 00:33:17.370 ], 00:33:17.370 "product_name": "Malloc disk", 00:33:17.370 "block_size": 512, 00:33:17.370 "num_blocks": 65536, 00:33:17.370 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:17.370 "assigned_rate_limits": { 00:33:17.370 "rw_ios_per_sec": 0, 00:33:17.370 "rw_mbytes_per_sec": 0, 00:33:17.370 "r_mbytes_per_sec": 0, 00:33:17.370 "w_mbytes_per_sec": 0 00:33:17.370 }, 00:33:17.370 "claimed": false, 00:33:17.370 "zoned": false, 00:33:17.370 "supported_io_types": { 00:33:17.370 "read": true, 00:33:17.370 "write": true, 00:33:17.370 "unmap": true, 00:33:17.370 "flush": true, 00:33:17.370 "reset": true, 00:33:17.370 "nvme_admin": false, 00:33:17.370 "nvme_io": false, 00:33:17.370 "nvme_io_md": false, 00:33:17.370 "write_zeroes": true, 00:33:17.370 "zcopy": true, 00:33:17.370 "get_zone_info": false, 00:33:17.370 "zone_management": false, 00:33:17.370 "zone_append": false, 00:33:17.370 "compare": false, 00:33:17.370 "compare_and_write": false, 00:33:17.370 "abort": true, 00:33:17.370 "seek_hole": false, 00:33:17.370 "seek_data": false, 00:33:17.370 "copy": true, 00:33:17.370 "nvme_iov_md": false 00:33:17.370 }, 00:33:17.370 "memory_domains": [ 00:33:17.370 { 00:33:17.370 "dma_device_id": "system", 00:33:17.370 "dma_device_type": 1 00:33:17.370 }, 00:33:17.370 { 00:33:17.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.370 "dma_device_type": 2 00:33:17.370 } 00:33:17.370 ], 00:33:17.370 "driver_specific": {} 00:33:17.370 } 00:33:17.370 ] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:17.370 17:29:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.371 BaseBdev4 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.371 [ 00:33:17.371 { 00:33:17.371 "name": "BaseBdev4", 00:33:17.371 "aliases": [ 00:33:17.371 "b6ddcdf8-99f1-4316-b667-38f301f2cd6a" 00:33:17.371 ], 00:33:17.371 "product_name": "Malloc disk", 00:33:17.371 "block_size": 512, 00:33:17.371 "num_blocks": 65536, 00:33:17.371 "uuid": "b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:17.371 "assigned_rate_limits": { 00:33:17.371 "rw_ios_per_sec": 0, 00:33:17.371 "rw_mbytes_per_sec": 0, 00:33:17.371 "r_mbytes_per_sec": 0, 00:33:17.371 "w_mbytes_per_sec": 0 00:33:17.371 }, 00:33:17.371 "claimed": false, 00:33:17.371 "zoned": false, 00:33:17.371 "supported_io_types": { 00:33:17.371 "read": true, 00:33:17.371 "write": true, 00:33:17.371 "unmap": true, 00:33:17.371 "flush": true, 00:33:17.371 "reset": true, 00:33:17.371 "nvme_admin": false, 00:33:17.371 "nvme_io": false, 00:33:17.371 "nvme_io_md": false, 00:33:17.371 "write_zeroes": true, 00:33:17.371 "zcopy": true, 00:33:17.371 "get_zone_info": false, 00:33:17.371 "zone_management": false, 00:33:17.371 "zone_append": false, 00:33:17.371 "compare": false, 00:33:17.371 "compare_and_write": false, 00:33:17.371 "abort": true, 00:33:17.371 "seek_hole": false, 00:33:17.371 "seek_data": false, 00:33:17.371 "copy": true, 00:33:17.371 "nvme_iov_md": false 00:33:17.371 }, 00:33:17.371 "memory_domains": [ 00:33:17.371 { 00:33:17.371 "dma_device_id": "system", 00:33:17.371 "dma_device_type": 1 00:33:17.371 }, 00:33:17.371 { 00:33:17.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.371 "dma_device_type": 2 00:33:17.371 } 00:33:17.371 ], 00:33:17.371 "driver_specific": {} 00:33:17.371 } 00:33:17.371 ] 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.371 [2024-11-26 17:29:54.777531] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:17.371 [2024-11-26 17:29:54.777581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:17.371 [2024-11-26 17:29:54.777602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:17.371 [2024-11-26 17:29:54.779786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:17.371 [2024-11-26 17:29:54.779833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.371 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:17.630 "name": "Existed_Raid", 00:33:17.630 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:17.630 "strip_size_kb": 0, 00:33:17.630 "state": "configuring", 00:33:17.630 "raid_level": "raid1", 00:33:17.630 "superblock": true, 00:33:17.630 "num_base_bdevs": 4, 00:33:17.630 "num_base_bdevs_discovered": 3, 00:33:17.630 "num_base_bdevs_operational": 4, 00:33:17.630 "base_bdevs_list": [ 00:33:17.630 { 00:33:17.630 "name": "BaseBdev1", 00:33:17.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.630 "is_configured": false, 00:33:17.630 "data_offset": 0, 00:33:17.630 "data_size": 0 00:33:17.630 }, 00:33:17.630 { 00:33:17.630 "name": "BaseBdev2", 00:33:17.630 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 
00:33:17.630 "is_configured": true, 00:33:17.630 "data_offset": 2048, 00:33:17.630 "data_size": 63488 00:33:17.630 }, 00:33:17.630 { 00:33:17.630 "name": "BaseBdev3", 00:33:17.630 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:17.630 "is_configured": true, 00:33:17.630 "data_offset": 2048, 00:33:17.630 "data_size": 63488 00:33:17.630 }, 00:33:17.630 { 00:33:17.630 "name": "BaseBdev4", 00:33:17.630 "uuid": "b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:17.630 "is_configured": true, 00:33:17.630 "data_offset": 2048, 00:33:17.630 "data_size": 63488 00:33:17.630 } 00:33:17.630 ] 00:33:17.630 }' 00:33:17.630 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:17.630 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.889 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:17.889 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.889 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.889 [2024-11-26 17:29:55.221640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:17.889 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.889 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:17.889 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:17.889 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:17.890 "name": "Existed_Raid", 00:33:17.890 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:17.890 "strip_size_kb": 0, 00:33:17.890 "state": "configuring", 00:33:17.890 "raid_level": "raid1", 00:33:17.890 "superblock": true, 00:33:17.890 "num_base_bdevs": 4, 00:33:17.890 "num_base_bdevs_discovered": 2, 00:33:17.890 "num_base_bdevs_operational": 4, 00:33:17.890 "base_bdevs_list": [ 00:33:17.890 { 00:33:17.890 "name": "BaseBdev1", 00:33:17.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.890 "is_configured": false, 00:33:17.890 "data_offset": 0, 00:33:17.890 "data_size": 0 00:33:17.890 }, 00:33:17.890 { 00:33:17.890 "name": null, 00:33:17.890 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 00:33:17.890 
"is_configured": false, 00:33:17.890 "data_offset": 0, 00:33:17.890 "data_size": 63488 00:33:17.890 }, 00:33:17.890 { 00:33:17.890 "name": "BaseBdev3", 00:33:17.890 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:17.890 "is_configured": true, 00:33:17.890 "data_offset": 2048, 00:33:17.890 "data_size": 63488 00:33:17.890 }, 00:33:17.890 { 00:33:17.890 "name": "BaseBdev4", 00:33:17.890 "uuid": "b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:17.890 "is_configured": true, 00:33:17.890 "data_offset": 2048, 00:33:17.890 "data_size": 63488 00:33:17.890 } 00:33:17.890 ] 00:33:17.890 }' 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:17.890 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.457 [2024-11-26 17:29:55.769084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:18.457 BaseBdev1 
00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.457 [ 00:33:18.457 { 00:33:18.457 "name": "BaseBdev1", 00:33:18.457 "aliases": [ 00:33:18.457 "959305d8-85a9-412c-93b2-54b273c6ee92" 00:33:18.457 ], 00:33:18.457 "product_name": "Malloc disk", 00:33:18.457 "block_size": 512, 00:33:18.457 "num_blocks": 65536, 00:33:18.457 "uuid": "959305d8-85a9-412c-93b2-54b273c6ee92", 00:33:18.457 "assigned_rate_limits": { 00:33:18.457 
"rw_ios_per_sec": 0, 00:33:18.457 "rw_mbytes_per_sec": 0, 00:33:18.457 "r_mbytes_per_sec": 0, 00:33:18.457 "w_mbytes_per_sec": 0 00:33:18.457 }, 00:33:18.457 "claimed": true, 00:33:18.457 "claim_type": "exclusive_write", 00:33:18.457 "zoned": false, 00:33:18.457 "supported_io_types": { 00:33:18.457 "read": true, 00:33:18.457 "write": true, 00:33:18.457 "unmap": true, 00:33:18.457 "flush": true, 00:33:18.457 "reset": true, 00:33:18.457 "nvme_admin": false, 00:33:18.457 "nvme_io": false, 00:33:18.457 "nvme_io_md": false, 00:33:18.457 "write_zeroes": true, 00:33:18.457 "zcopy": true, 00:33:18.457 "get_zone_info": false, 00:33:18.457 "zone_management": false, 00:33:18.457 "zone_append": false, 00:33:18.457 "compare": false, 00:33:18.457 "compare_and_write": false, 00:33:18.457 "abort": true, 00:33:18.457 "seek_hole": false, 00:33:18.457 "seek_data": false, 00:33:18.457 "copy": true, 00:33:18.457 "nvme_iov_md": false 00:33:18.457 }, 00:33:18.457 "memory_domains": [ 00:33:18.457 { 00:33:18.457 "dma_device_id": "system", 00:33:18.457 "dma_device_type": 1 00:33:18.457 }, 00:33:18.457 { 00:33:18.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:18.457 "dma_device_type": 2 00:33:18.457 } 00:33:18.457 ], 00:33:18.457 "driver_specific": {} 00:33:18.457 } 00:33:18.457 ] 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:18.457 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:18.458 "name": "Existed_Raid", 00:33:18.458 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:18.458 "strip_size_kb": 0, 00:33:18.458 "state": "configuring", 00:33:18.458 "raid_level": "raid1", 00:33:18.458 "superblock": true, 00:33:18.458 "num_base_bdevs": 4, 00:33:18.458 "num_base_bdevs_discovered": 3, 00:33:18.458 "num_base_bdevs_operational": 4, 00:33:18.458 "base_bdevs_list": [ 00:33:18.458 { 00:33:18.458 "name": "BaseBdev1", 00:33:18.458 "uuid": "959305d8-85a9-412c-93b2-54b273c6ee92", 00:33:18.458 "is_configured": true, 00:33:18.458 "data_offset": 2048, 00:33:18.458 "data_size": 63488 
00:33:18.458 }, 00:33:18.458 { 00:33:18.458 "name": null, 00:33:18.458 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 00:33:18.458 "is_configured": false, 00:33:18.458 "data_offset": 0, 00:33:18.458 "data_size": 63488 00:33:18.458 }, 00:33:18.458 { 00:33:18.458 "name": "BaseBdev3", 00:33:18.458 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:18.458 "is_configured": true, 00:33:18.458 "data_offset": 2048, 00:33:18.458 "data_size": 63488 00:33:18.458 }, 00:33:18.458 { 00:33:18.458 "name": "BaseBdev4", 00:33:18.458 "uuid": "b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:18.458 "is_configured": true, 00:33:18.458 "data_offset": 2048, 00:33:18.458 "data_size": 63488 00:33:18.458 } 00:33:18.458 ] 00:33:18.458 }' 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:18.458 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.025 
[2024-11-26 17:29:56.277291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.025 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:19.026 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.026 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.026 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.026 17:29:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:19.026 "name": "Existed_Raid", 00:33:19.026 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:19.026 "strip_size_kb": 0, 00:33:19.026 "state": "configuring", 00:33:19.026 "raid_level": "raid1", 00:33:19.026 "superblock": true, 00:33:19.026 "num_base_bdevs": 4, 00:33:19.026 "num_base_bdevs_discovered": 2, 00:33:19.026 "num_base_bdevs_operational": 4, 00:33:19.026 "base_bdevs_list": [ 00:33:19.026 { 00:33:19.026 "name": "BaseBdev1", 00:33:19.026 "uuid": "959305d8-85a9-412c-93b2-54b273c6ee92", 00:33:19.026 "is_configured": true, 00:33:19.026 "data_offset": 2048, 00:33:19.026 "data_size": 63488 00:33:19.026 }, 00:33:19.026 { 00:33:19.026 "name": null, 00:33:19.026 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 00:33:19.026 "is_configured": false, 00:33:19.026 "data_offset": 0, 00:33:19.026 "data_size": 63488 00:33:19.026 }, 00:33:19.026 { 00:33:19.026 "name": null, 00:33:19.026 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:19.026 "is_configured": false, 00:33:19.026 "data_offset": 0, 00:33:19.026 "data_size": 63488 00:33:19.026 }, 00:33:19.026 { 00:33:19.026 "name": "BaseBdev4", 00:33:19.026 "uuid": "b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:19.026 "is_configured": true, 00:33:19.026 "data_offset": 2048, 00:33:19.026 "data_size": 63488 00:33:19.026 } 00:33:19.026 ] 00:33:19.026 }' 00:33:19.026 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:19.026 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.284 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:19.284 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.284 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.284 
17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.284 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.543 [2024-11-26 17:29:56.749374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.543 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:19.543 "name": "Existed_Raid", 00:33:19.543 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:19.543 "strip_size_kb": 0, 00:33:19.543 "state": "configuring", 00:33:19.543 "raid_level": "raid1", 00:33:19.543 "superblock": true, 00:33:19.543 "num_base_bdevs": 4, 00:33:19.543 "num_base_bdevs_discovered": 3, 00:33:19.543 "num_base_bdevs_operational": 4, 00:33:19.543 "base_bdevs_list": [ 00:33:19.543 { 00:33:19.543 "name": "BaseBdev1", 00:33:19.543 "uuid": "959305d8-85a9-412c-93b2-54b273c6ee92", 00:33:19.543 "is_configured": true, 00:33:19.543 "data_offset": 2048, 00:33:19.543 "data_size": 63488 00:33:19.543 }, 00:33:19.543 { 00:33:19.543 "name": null, 00:33:19.543 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 00:33:19.544 "is_configured": false, 00:33:19.544 "data_offset": 0, 00:33:19.544 "data_size": 63488 00:33:19.544 }, 00:33:19.544 { 00:33:19.544 "name": "BaseBdev3", 00:33:19.544 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:19.544 "is_configured": true, 00:33:19.544 "data_offset": 2048, 00:33:19.544 "data_size": 63488 00:33:19.544 }, 00:33:19.544 { 00:33:19.544 "name": "BaseBdev4", 00:33:19.544 "uuid": 
"b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:19.544 "is_configured": true, 00:33:19.544 "data_offset": 2048, 00:33:19.544 "data_size": 63488 00:33:19.544 } 00:33:19.544 ] 00:33:19.544 }' 00:33:19.544 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:19.544 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.803 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:19.803 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.803 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.803 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.803 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.803 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:19.803 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:19.803 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.803 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.803 [2024-11-26 17:29:57.161513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:20.062 "name": "Existed_Raid", 00:33:20.062 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:20.062 "strip_size_kb": 0, 00:33:20.062 "state": "configuring", 00:33:20.062 "raid_level": "raid1", 00:33:20.062 "superblock": true, 00:33:20.062 "num_base_bdevs": 4, 00:33:20.062 "num_base_bdevs_discovered": 2, 00:33:20.062 "num_base_bdevs_operational": 4, 00:33:20.062 "base_bdevs_list": [ 00:33:20.062 { 00:33:20.062 "name": null, 00:33:20.062 
"uuid": "959305d8-85a9-412c-93b2-54b273c6ee92", 00:33:20.062 "is_configured": false, 00:33:20.062 "data_offset": 0, 00:33:20.062 "data_size": 63488 00:33:20.062 }, 00:33:20.062 { 00:33:20.062 "name": null, 00:33:20.062 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 00:33:20.062 "is_configured": false, 00:33:20.062 "data_offset": 0, 00:33:20.062 "data_size": 63488 00:33:20.062 }, 00:33:20.062 { 00:33:20.062 "name": "BaseBdev3", 00:33:20.062 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:20.062 "is_configured": true, 00:33:20.062 "data_offset": 2048, 00:33:20.062 "data_size": 63488 00:33:20.062 }, 00:33:20.062 { 00:33:20.062 "name": "BaseBdev4", 00:33:20.062 "uuid": "b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:20.062 "is_configured": true, 00:33:20.062 "data_offset": 2048, 00:33:20.062 "data_size": 63488 00:33:20.062 } 00:33:20.062 ] 00:33:20.062 }' 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:20.062 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.321 [2024-11-26 17:29:57.747493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.321 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:20.321 17:29:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.581 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.581 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:20.581 "name": "Existed_Raid", 00:33:20.581 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:20.581 "strip_size_kb": 0, 00:33:20.581 "state": "configuring", 00:33:20.581 "raid_level": "raid1", 00:33:20.581 "superblock": true, 00:33:20.581 "num_base_bdevs": 4, 00:33:20.581 "num_base_bdevs_discovered": 3, 00:33:20.581 "num_base_bdevs_operational": 4, 00:33:20.581 "base_bdevs_list": [ 00:33:20.581 { 00:33:20.581 "name": null, 00:33:20.581 "uuid": "959305d8-85a9-412c-93b2-54b273c6ee92", 00:33:20.581 "is_configured": false, 00:33:20.581 "data_offset": 0, 00:33:20.581 "data_size": 63488 00:33:20.581 }, 00:33:20.581 { 00:33:20.581 "name": "BaseBdev2", 00:33:20.581 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 00:33:20.581 "is_configured": true, 00:33:20.581 "data_offset": 2048, 00:33:20.581 "data_size": 63488 00:33:20.581 }, 00:33:20.581 { 00:33:20.581 "name": "BaseBdev3", 00:33:20.581 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:20.581 "is_configured": true, 00:33:20.581 "data_offset": 2048, 00:33:20.581 "data_size": 63488 00:33:20.581 }, 00:33:20.581 { 00:33:20.581 "name": "BaseBdev4", 00:33:20.581 "uuid": "b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:20.581 "is_configured": true, 00:33:20.581 "data_offset": 2048, 00:33:20.581 "data_size": 63488 00:33:20.581 } 00:33:20.581 ] 00:33:20.581 }' 00:33:20.581 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:20.581 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.838 17:29:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.838 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 959305d8-85a9-412c-93b2-54b273c6ee92 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.097 [2024-11-26 17:29:58.341196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:21.097 [2024-11-26 17:29:58.341429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:21.097 [2024-11-26 17:29:58.341448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:21.097 [2024-11-26 17:29:58.341717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:33:21.097 NewBaseBdev 00:33:21.097 [2024-11-26 17:29:58.341868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:21.097 [2024-11-26 17:29:58.341878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:33:21.097 [2024-11-26 17:29:58.342024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:21.097 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.097 17:29:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.097 [ 00:33:21.097 { 00:33:21.097 "name": "NewBaseBdev", 00:33:21.097 "aliases": [ 00:33:21.097 "959305d8-85a9-412c-93b2-54b273c6ee92" 00:33:21.097 ], 00:33:21.097 "product_name": "Malloc disk", 00:33:21.097 "block_size": 512, 00:33:21.097 "num_blocks": 65536, 00:33:21.098 "uuid": "959305d8-85a9-412c-93b2-54b273c6ee92", 00:33:21.098 "assigned_rate_limits": { 00:33:21.098 "rw_ios_per_sec": 0, 00:33:21.098 "rw_mbytes_per_sec": 0, 00:33:21.098 "r_mbytes_per_sec": 0, 00:33:21.098 "w_mbytes_per_sec": 0 00:33:21.098 }, 00:33:21.098 "claimed": true, 00:33:21.098 "claim_type": "exclusive_write", 00:33:21.098 "zoned": false, 00:33:21.098 "supported_io_types": { 00:33:21.098 "read": true, 00:33:21.098 "write": true, 00:33:21.098 "unmap": true, 00:33:21.098 "flush": true, 00:33:21.098 "reset": true, 00:33:21.098 "nvme_admin": false, 00:33:21.098 "nvme_io": false, 00:33:21.098 "nvme_io_md": false, 00:33:21.098 "write_zeroes": true, 00:33:21.098 "zcopy": true, 00:33:21.098 "get_zone_info": false, 00:33:21.098 "zone_management": false, 00:33:21.098 "zone_append": false, 00:33:21.098 "compare": false, 00:33:21.098 "compare_and_write": false, 00:33:21.098 "abort": true, 00:33:21.098 "seek_hole": false, 00:33:21.098 "seek_data": false, 00:33:21.098 "copy": true, 00:33:21.098 "nvme_iov_md": false 00:33:21.098 }, 00:33:21.098 "memory_domains": [ 00:33:21.098 { 00:33:21.098 "dma_device_id": "system", 00:33:21.098 "dma_device_type": 1 00:33:21.098 }, 00:33:21.098 { 00:33:21.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.098 "dma_device_type": 2 00:33:21.098 } 00:33:21.098 ], 00:33:21.098 "driver_specific": {} 00:33:21.098 } 00:33:21.098 ] 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:21.098 17:29:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:21.098 "name": "Existed_Raid", 00:33:21.098 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:21.098 "strip_size_kb": 0, 00:33:21.098 
"state": "online", 00:33:21.098 "raid_level": "raid1", 00:33:21.098 "superblock": true, 00:33:21.098 "num_base_bdevs": 4, 00:33:21.098 "num_base_bdevs_discovered": 4, 00:33:21.098 "num_base_bdevs_operational": 4, 00:33:21.098 "base_bdevs_list": [ 00:33:21.098 { 00:33:21.098 "name": "NewBaseBdev", 00:33:21.098 "uuid": "959305d8-85a9-412c-93b2-54b273c6ee92", 00:33:21.098 "is_configured": true, 00:33:21.098 "data_offset": 2048, 00:33:21.098 "data_size": 63488 00:33:21.098 }, 00:33:21.098 { 00:33:21.098 "name": "BaseBdev2", 00:33:21.098 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 00:33:21.098 "is_configured": true, 00:33:21.098 "data_offset": 2048, 00:33:21.098 "data_size": 63488 00:33:21.098 }, 00:33:21.098 { 00:33:21.098 "name": "BaseBdev3", 00:33:21.098 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:21.098 "is_configured": true, 00:33:21.098 "data_offset": 2048, 00:33:21.098 "data_size": 63488 00:33:21.098 }, 00:33:21.098 { 00:33:21.098 "name": "BaseBdev4", 00:33:21.098 "uuid": "b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:21.098 "is_configured": true, 00:33:21.098 "data_offset": 2048, 00:33:21.098 "data_size": 63488 00:33:21.098 } 00:33:21.098 ] 00:33:21.098 }' 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:21.098 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:21.667 
17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:21.667 [2024-11-26 17:29:58.845699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:21.667 "name": "Existed_Raid", 00:33:21.667 "aliases": [ 00:33:21.667 "8aa2a932-8446-41ac-82be-69e8313ce051" 00:33:21.667 ], 00:33:21.667 "product_name": "Raid Volume", 00:33:21.667 "block_size": 512, 00:33:21.667 "num_blocks": 63488, 00:33:21.667 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:21.667 "assigned_rate_limits": { 00:33:21.667 "rw_ios_per_sec": 0, 00:33:21.667 "rw_mbytes_per_sec": 0, 00:33:21.667 "r_mbytes_per_sec": 0, 00:33:21.667 "w_mbytes_per_sec": 0 00:33:21.667 }, 00:33:21.667 "claimed": false, 00:33:21.667 "zoned": false, 00:33:21.667 "supported_io_types": { 00:33:21.667 "read": true, 00:33:21.667 "write": true, 00:33:21.667 "unmap": false, 00:33:21.667 "flush": false, 00:33:21.667 "reset": true, 00:33:21.667 "nvme_admin": false, 00:33:21.667 "nvme_io": false, 00:33:21.667 "nvme_io_md": false, 00:33:21.667 "write_zeroes": true, 00:33:21.667 "zcopy": false, 00:33:21.667 "get_zone_info": false, 00:33:21.667 "zone_management": false, 00:33:21.667 "zone_append": false, 00:33:21.667 "compare": false, 00:33:21.667 "compare_and_write": false, 00:33:21.667 
"abort": false, 00:33:21.667 "seek_hole": false, 00:33:21.667 "seek_data": false, 00:33:21.667 "copy": false, 00:33:21.667 "nvme_iov_md": false 00:33:21.667 }, 00:33:21.667 "memory_domains": [ 00:33:21.667 { 00:33:21.667 "dma_device_id": "system", 00:33:21.667 "dma_device_type": 1 00:33:21.667 }, 00:33:21.667 { 00:33:21.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.667 "dma_device_type": 2 00:33:21.667 }, 00:33:21.667 { 00:33:21.667 "dma_device_id": "system", 00:33:21.667 "dma_device_type": 1 00:33:21.667 }, 00:33:21.667 { 00:33:21.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.667 "dma_device_type": 2 00:33:21.667 }, 00:33:21.667 { 00:33:21.667 "dma_device_id": "system", 00:33:21.667 "dma_device_type": 1 00:33:21.667 }, 00:33:21.667 { 00:33:21.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.667 "dma_device_type": 2 00:33:21.667 }, 00:33:21.667 { 00:33:21.667 "dma_device_id": "system", 00:33:21.667 "dma_device_type": 1 00:33:21.667 }, 00:33:21.667 { 00:33:21.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.667 "dma_device_type": 2 00:33:21.667 } 00:33:21.667 ], 00:33:21.667 "driver_specific": { 00:33:21.667 "raid": { 00:33:21.667 "uuid": "8aa2a932-8446-41ac-82be-69e8313ce051", 00:33:21.667 "strip_size_kb": 0, 00:33:21.667 "state": "online", 00:33:21.667 "raid_level": "raid1", 00:33:21.667 "superblock": true, 00:33:21.667 "num_base_bdevs": 4, 00:33:21.667 "num_base_bdevs_discovered": 4, 00:33:21.667 "num_base_bdevs_operational": 4, 00:33:21.667 "base_bdevs_list": [ 00:33:21.667 { 00:33:21.667 "name": "NewBaseBdev", 00:33:21.667 "uuid": "959305d8-85a9-412c-93b2-54b273c6ee92", 00:33:21.667 "is_configured": true, 00:33:21.667 "data_offset": 2048, 00:33:21.667 "data_size": 63488 00:33:21.667 }, 00:33:21.667 { 00:33:21.667 "name": "BaseBdev2", 00:33:21.667 "uuid": "3981584f-fc30-45b9-bbaf-b46ec517f93a", 00:33:21.667 "is_configured": true, 00:33:21.667 "data_offset": 2048, 00:33:21.667 "data_size": 63488 00:33:21.667 }, 00:33:21.667 { 
00:33:21.667 "name": "BaseBdev3", 00:33:21.667 "uuid": "fc6dc222-1148-4cc4-b506-8aeeceb79f67", 00:33:21.667 "is_configured": true, 00:33:21.667 "data_offset": 2048, 00:33:21.667 "data_size": 63488 00:33:21.667 }, 00:33:21.667 { 00:33:21.667 "name": "BaseBdev4", 00:33:21.667 "uuid": "b6ddcdf8-99f1-4316-b667-38f301f2cd6a", 00:33:21.667 "is_configured": true, 00:33:21.667 "data_offset": 2048, 00:33:21.667 "data_size": 63488 00:33:21.667 } 00:33:21.667 ] 00:33:21.667 } 00:33:21.667 } 00:33:21.667 }' 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:21.667 BaseBdev2 00:33:21.667 BaseBdev3 00:33:21.667 BaseBdev4' 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.667 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.667 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.926 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.927 [2024-11-26 17:29:59.157426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:21.927 [2024-11-26 17:29:59.157559] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:21.927 [2024-11-26 17:29:59.157665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:21.927 [2024-11-26 17:29:59.157962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:21.927 [2024-11-26 17:29:59.157978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74292 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74292 ']' 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74292 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74292 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:21.927 killing process with pid 74292 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74292' 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74292 00:33:21.927 [2024-11-26 17:29:59.202808] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:21.927 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74292 00:33:22.186 [2024-11-26 17:29:59.612764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:23.640 17:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:33:23.640 00:33:23.640 real 0m11.663s 00:33:23.640 user 0m18.587s 00:33:23.640 sys 0m2.182s 00:33:23.640 17:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:33:23.640 17:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:23.640 ************************************ 00:33:23.640 END TEST raid_state_function_test_sb 00:33:23.640 ************************************ 00:33:23.640 17:30:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:33:23.640 17:30:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:23.640 17:30:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.640 17:30:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:23.640 ************************************ 00:33:23.640 START TEST raid_superblock_test 00:33:23.640 ************************************ 00:33:23.640 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:33:23.640 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:33:23.640 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:33:23.640 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:23.640 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:23.640 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:23.640 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:33:23.640 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:23.640 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:23.641 17:30:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74958 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74958 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74958 ']' 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.641 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.641 [2024-11-26 17:30:00.966168] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:33:23.641 [2024-11-26 17:30:00.966349] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74958 ] 00:33:23.899 [2024-11-26 17:30:01.142197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.899 [2024-11-26 17:30:01.256526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.158 [2024-11-26 17:30:01.462423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:24.158 [2024-11-26 17:30:01.462459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:33:24.725 
17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.725 malloc1 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.725 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.725 [2024-11-26 17:30:01.918878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:24.726 [2024-11-26 17:30:01.919089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:24.726 [2024-11-26 17:30:01.919154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:24.726 [2024-11-26 17:30:01.919242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:24.726 [2024-11-26 17:30:01.921684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:24.726 [2024-11-26 17:30:01.921822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:24.726 pt1 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.726 malloc2 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.726 [2024-11-26 17:30:01.975458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:24.726 [2024-11-26 17:30:01.975517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:24.726 [2024-11-26 17:30:01.975564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:24.726 [2024-11-26 17:30:01.975575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:24.726 [2024-11-26 17:30:01.977939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:24.726 [2024-11-26 17:30:01.977978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:24.726 
pt2 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.726 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.726 malloc3 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.726 [2024-11-26 17:30:02.041857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:24.726 [2024-11-26 17:30:02.042063] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:24.726 [2024-11-26 17:30:02.042110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:24.726 [2024-11-26 17:30:02.042124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:24.726 [2024-11-26 17:30:02.044685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:24.726 [2024-11-26 17:30:02.044723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:24.726 pt3 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.726 malloc4 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.726 [2024-11-26 17:30:02.096699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:24.726 [2024-11-26 17:30:02.096867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:24.726 [2024-11-26 17:30:02.096925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:24.726 [2024-11-26 17:30:02.096995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:24.726 [2024-11-26 17:30:02.099431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:24.726 [2024-11-26 17:30:02.099572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:24.726 pt4 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.726 [2024-11-26 17:30:02.108732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:24.726 [2024-11-26 17:30:02.110895] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:24.726 [2024-11-26 17:30:02.110959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:24.726 [2024-11-26 17:30:02.111022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:24.726 [2024-11-26 17:30:02.111219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:24.726 [2024-11-26 17:30:02.111237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:24.726 [2024-11-26 17:30:02.111512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:24.726 [2024-11-26 17:30:02.111702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:24.726 [2024-11-26 17:30:02.111720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:24.726 [2024-11-26 17:30:02.111862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:24.726 
17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.726 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:24.726 "name": "raid_bdev1", 00:33:24.726 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:24.726 "strip_size_kb": 0, 00:33:24.726 "state": "online", 00:33:24.726 "raid_level": "raid1", 00:33:24.726 "superblock": true, 00:33:24.726 "num_base_bdevs": 4, 00:33:24.726 "num_base_bdevs_discovered": 4, 00:33:24.726 "num_base_bdevs_operational": 4, 00:33:24.726 "base_bdevs_list": [ 00:33:24.726 { 00:33:24.726 "name": "pt1", 00:33:24.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:24.727 "is_configured": true, 00:33:24.727 "data_offset": 2048, 00:33:24.727 "data_size": 63488 00:33:24.727 }, 00:33:24.727 { 00:33:24.727 "name": "pt2", 00:33:24.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:24.727 "is_configured": true, 00:33:24.727 "data_offset": 2048, 00:33:24.727 "data_size": 63488 00:33:24.727 }, 00:33:24.727 { 00:33:24.727 "name": "pt3", 00:33:24.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:24.727 "is_configured": true, 00:33:24.727 "data_offset": 2048, 00:33:24.727 "data_size": 63488 
00:33:24.727 }, 00:33:24.727 { 00:33:24.727 "name": "pt4", 00:33:24.727 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:24.727 "is_configured": true, 00:33:24.727 "data_offset": 2048, 00:33:24.727 "data_size": 63488 00:33:24.727 } 00:33:24.727 ] 00:33:24.727 }' 00:33:24.727 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:24.727 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.294 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:25.294 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:25.294 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:25.294 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:25.294 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:25.294 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.295 [2024-11-26 17:30:02.593161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:25.295 "name": "raid_bdev1", 00:33:25.295 "aliases": [ 00:33:25.295 "9e85a89c-a53e-4859-8b0a-b4c02e8a4133" 00:33:25.295 ], 
00:33:25.295 "product_name": "Raid Volume", 00:33:25.295 "block_size": 512, 00:33:25.295 "num_blocks": 63488, 00:33:25.295 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:25.295 "assigned_rate_limits": { 00:33:25.295 "rw_ios_per_sec": 0, 00:33:25.295 "rw_mbytes_per_sec": 0, 00:33:25.295 "r_mbytes_per_sec": 0, 00:33:25.295 "w_mbytes_per_sec": 0 00:33:25.295 }, 00:33:25.295 "claimed": false, 00:33:25.295 "zoned": false, 00:33:25.295 "supported_io_types": { 00:33:25.295 "read": true, 00:33:25.295 "write": true, 00:33:25.295 "unmap": false, 00:33:25.295 "flush": false, 00:33:25.295 "reset": true, 00:33:25.295 "nvme_admin": false, 00:33:25.295 "nvme_io": false, 00:33:25.295 "nvme_io_md": false, 00:33:25.295 "write_zeroes": true, 00:33:25.295 "zcopy": false, 00:33:25.295 "get_zone_info": false, 00:33:25.295 "zone_management": false, 00:33:25.295 "zone_append": false, 00:33:25.295 "compare": false, 00:33:25.295 "compare_and_write": false, 00:33:25.295 "abort": false, 00:33:25.295 "seek_hole": false, 00:33:25.295 "seek_data": false, 00:33:25.295 "copy": false, 00:33:25.295 "nvme_iov_md": false 00:33:25.295 }, 00:33:25.295 "memory_domains": [ 00:33:25.295 { 00:33:25.295 "dma_device_id": "system", 00:33:25.295 "dma_device_type": 1 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:25.295 "dma_device_type": 2 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "dma_device_id": "system", 00:33:25.295 "dma_device_type": 1 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:25.295 "dma_device_type": 2 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "dma_device_id": "system", 00:33:25.295 "dma_device_type": 1 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:25.295 "dma_device_type": 2 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "dma_device_id": "system", 00:33:25.295 "dma_device_type": 1 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:33:25.295 "dma_device_type": 2 00:33:25.295 } 00:33:25.295 ], 00:33:25.295 "driver_specific": { 00:33:25.295 "raid": { 00:33:25.295 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:25.295 "strip_size_kb": 0, 00:33:25.295 "state": "online", 00:33:25.295 "raid_level": "raid1", 00:33:25.295 "superblock": true, 00:33:25.295 "num_base_bdevs": 4, 00:33:25.295 "num_base_bdevs_discovered": 4, 00:33:25.295 "num_base_bdevs_operational": 4, 00:33:25.295 "base_bdevs_list": [ 00:33:25.295 { 00:33:25.295 "name": "pt1", 00:33:25.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:25.295 "is_configured": true, 00:33:25.295 "data_offset": 2048, 00:33:25.295 "data_size": 63488 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "name": "pt2", 00:33:25.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:25.295 "is_configured": true, 00:33:25.295 "data_offset": 2048, 00:33:25.295 "data_size": 63488 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "name": "pt3", 00:33:25.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:25.295 "is_configured": true, 00:33:25.295 "data_offset": 2048, 00:33:25.295 "data_size": 63488 00:33:25.295 }, 00:33:25.295 { 00:33:25.295 "name": "pt4", 00:33:25.295 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:25.295 "is_configured": true, 00:33:25.295 "data_offset": 2048, 00:33:25.295 "data_size": 63488 00:33:25.295 } 00:33:25.295 ] 00:33:25.295 } 00:33:25.295 } 00:33:25.295 }' 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:25.295 pt2 00:33:25.295 pt3 00:33:25.295 pt4' 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.295 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:25.554 17:30:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.554 [2024-11-26 17:30:02.921191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9e85a89c-a53e-4859-8b0a-b4c02e8a4133 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9e85a89c-a53e-4859-8b0a-b4c02e8a4133 ']' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.554 [2024-11-26 17:30:02.960885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:25.554 [2024-11-26 17:30:02.960914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:25.554 [2024-11-26 17:30:02.960998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:25.554 [2024-11-26 17:30:02.961095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:25.554 [2024-11-26 17:30:02.961115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:25.554 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.815 17:30:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.815 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.815 [2024-11-26 17:30:03.116930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:25.815 [2024-11-26 17:30:03.119161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:25.815 [2024-11-26 17:30:03.119224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:33:25.815 [2024-11-26 17:30:03.119263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:33:25.815 [2024-11-26 17:30:03.119315] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:25.815 [2024-11-26 17:30:03.119370] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:25.815 [2024-11-26 17:30:03.119392] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:33:25.815 [2024-11-26 17:30:03.119414] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:33:25.815 [2024-11-26 17:30:03.119430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:25.815 [2024-11-26 17:30:03.119443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:33:25.815 request: 00:33:25.815 { 00:33:25.815 "name": "raid_bdev1", 00:33:25.815 "raid_level": "raid1", 00:33:25.815 "base_bdevs": [ 00:33:25.815 "malloc1", 00:33:25.815 "malloc2", 00:33:25.815 "malloc3", 00:33:25.815 "malloc4" 00:33:25.815 ], 00:33:25.815 "superblock": false, 00:33:25.816 "method": "bdev_raid_create", 00:33:25.816 "req_id": 1 00:33:25.816 } 00:33:25.816 Got JSON-RPC error response 00:33:25.816 response: 00:33:25.816 { 00:33:25.816 "code": -17, 00:33:25.816 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:25.816 } 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:25.816 
17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.816 [2024-11-26 17:30:03.184925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:25.816 [2024-11-26 17:30:03.184995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:25.816 [2024-11-26 17:30:03.185016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:25.816 [2024-11-26 17:30:03.185030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:25.816 [2024-11-26 17:30:03.187583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:25.816 [2024-11-26 17:30:03.187630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:25.816 [2024-11-26 17:30:03.187716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:25.816 [2024-11-26 17:30:03.187776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:25.816 pt1 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:25.816 17:30:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:25.816 "name": "raid_bdev1", 00:33:25.816 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:25.816 "strip_size_kb": 0, 00:33:25.816 "state": "configuring", 00:33:25.816 "raid_level": "raid1", 00:33:25.816 "superblock": true, 00:33:25.816 "num_base_bdevs": 4, 00:33:25.816 "num_base_bdevs_discovered": 1, 00:33:25.816 "num_base_bdevs_operational": 4, 00:33:25.816 "base_bdevs_list": [ 00:33:25.816 { 00:33:25.816 "name": "pt1", 00:33:25.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:25.816 "is_configured": true, 00:33:25.816 "data_offset": 2048, 00:33:25.816 "data_size": 63488 00:33:25.816 }, 00:33:25.816 { 00:33:25.816 "name": null, 00:33:25.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:25.816 "is_configured": false, 00:33:25.816 "data_offset": 2048, 00:33:25.816 "data_size": 63488 00:33:25.816 }, 00:33:25.816 { 00:33:25.816 "name": null, 00:33:25.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:25.816 
"is_configured": false, 00:33:25.816 "data_offset": 2048, 00:33:25.816 "data_size": 63488 00:33:25.816 }, 00:33:25.816 { 00:33:25.816 "name": null, 00:33:25.816 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:25.816 "is_configured": false, 00:33:25.816 "data_offset": 2048, 00:33:25.816 "data_size": 63488 00:33:25.816 } 00:33:25.816 ] 00:33:25.816 }' 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:25.816 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.444 [2024-11-26 17:30:03.593026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:26.444 [2024-11-26 17:30:03.593115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.444 [2024-11-26 17:30:03.593140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:26.444 [2024-11-26 17:30:03.593155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.444 [2024-11-26 17:30:03.593596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.444 [2024-11-26 17:30:03.593618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:26.444 [2024-11-26 17:30:03.593701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:26.444 [2024-11-26 17:30:03.593726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:33:26.444 pt2 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.444 [2024-11-26 17:30:03.601023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.444 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:26.444 "name": "raid_bdev1", 00:33:26.444 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:26.444 "strip_size_kb": 0, 00:33:26.444 "state": "configuring", 00:33:26.444 "raid_level": "raid1", 00:33:26.444 "superblock": true, 00:33:26.444 "num_base_bdevs": 4, 00:33:26.444 "num_base_bdevs_discovered": 1, 00:33:26.444 "num_base_bdevs_operational": 4, 00:33:26.444 "base_bdevs_list": [ 00:33:26.444 { 00:33:26.444 "name": "pt1", 00:33:26.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:26.444 "is_configured": true, 00:33:26.444 "data_offset": 2048, 00:33:26.444 "data_size": 63488 00:33:26.444 }, 00:33:26.444 { 00:33:26.445 "name": null, 00:33:26.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:26.445 "is_configured": false, 00:33:26.445 "data_offset": 0, 00:33:26.445 "data_size": 63488 00:33:26.445 }, 00:33:26.445 { 00:33:26.445 "name": null, 00:33:26.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:26.445 "is_configured": false, 00:33:26.445 "data_offset": 2048, 00:33:26.445 "data_size": 63488 00:33:26.445 }, 00:33:26.445 { 00:33:26.445 "name": null, 00:33:26.445 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:26.445 "is_configured": false, 00:33:26.445 "data_offset": 2048, 00:33:26.445 "data_size": 63488 00:33:26.445 } 00:33:26.445 ] 00:33:26.445 }' 00:33:26.445 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:26.445 17:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.703 [2024-11-26 17:30:04.121139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:26.703 [2024-11-26 17:30:04.121207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.703 [2024-11-26 17:30:04.121232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:33:26.703 [2024-11-26 17:30:04.121244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.703 [2024-11-26 17:30:04.121698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.703 [2024-11-26 17:30:04.121718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:26.703 [2024-11-26 17:30:04.121803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:26.703 [2024-11-26 17:30:04.121826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:26.703 pt2 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:26.703 17:30:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.703 [2024-11-26 17:30:04.129112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:26.703 [2024-11-26 17:30:04.129164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.703 [2024-11-26 17:30:04.129184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:26.703 [2024-11-26 17:30:04.129196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.703 [2024-11-26 17:30:04.129568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.703 [2024-11-26 17:30:04.129593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:26.703 [2024-11-26 17:30:04.129656] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:26.703 [2024-11-26 17:30:04.129681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:26.703 pt3 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.703 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.703 [2024-11-26 17:30:04.137087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:26.703 [2024-11-26 
17:30:04.137129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.703 [2024-11-26 17:30:04.137148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:33:26.703 [2024-11-26 17:30:04.137158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.703 [2024-11-26 17:30:04.137530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.703 [2024-11-26 17:30:04.137552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:26.704 [2024-11-26 17:30:04.137612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:26.704 [2024-11-26 17:30:04.137636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:26.704 [2024-11-26 17:30:04.137772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:26.704 [2024-11-26 17:30:04.137782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:26.704 [2024-11-26 17:30:04.138033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:26.704 [2024-11-26 17:30:04.138213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:26.704 [2024-11-26 17:30:04.138228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:26.704 [2024-11-26 17:30:04.138358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:26.704 pt4 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.704 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.962 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.962 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:26.962 "name": "raid_bdev1", 00:33:26.962 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:26.962 "strip_size_kb": 0, 00:33:26.962 "state": "online", 00:33:26.962 "raid_level": "raid1", 00:33:26.962 "superblock": true, 00:33:26.962 "num_base_bdevs": 4, 00:33:26.962 
"num_base_bdevs_discovered": 4, 00:33:26.962 "num_base_bdevs_operational": 4, 00:33:26.962 "base_bdevs_list": [ 00:33:26.962 { 00:33:26.962 "name": "pt1", 00:33:26.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:26.962 "is_configured": true, 00:33:26.962 "data_offset": 2048, 00:33:26.963 "data_size": 63488 00:33:26.963 }, 00:33:26.963 { 00:33:26.963 "name": "pt2", 00:33:26.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:26.963 "is_configured": true, 00:33:26.963 "data_offset": 2048, 00:33:26.963 "data_size": 63488 00:33:26.963 }, 00:33:26.963 { 00:33:26.963 "name": "pt3", 00:33:26.963 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:26.963 "is_configured": true, 00:33:26.963 "data_offset": 2048, 00:33:26.963 "data_size": 63488 00:33:26.963 }, 00:33:26.963 { 00:33:26.963 "name": "pt4", 00:33:26.963 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:26.963 "is_configured": true, 00:33:26.963 "data_offset": 2048, 00:33:26.963 "data_size": 63488 00:33:26.963 } 00:33:26.963 ] 00:33:26.963 }' 00:33:26.963 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:26.963 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.220 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:33:27.220 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:27.220 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:27.220 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:27.220 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:27.220 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:27.220 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:33:27.220 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.221 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:27.221 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.221 [2024-11-26 17:30:04.646265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:27.480 "name": "raid_bdev1", 00:33:27.480 "aliases": [ 00:33:27.480 "9e85a89c-a53e-4859-8b0a-b4c02e8a4133" 00:33:27.480 ], 00:33:27.480 "product_name": "Raid Volume", 00:33:27.480 "block_size": 512, 00:33:27.480 "num_blocks": 63488, 00:33:27.480 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:27.480 "assigned_rate_limits": { 00:33:27.480 "rw_ios_per_sec": 0, 00:33:27.480 "rw_mbytes_per_sec": 0, 00:33:27.480 "r_mbytes_per_sec": 0, 00:33:27.480 "w_mbytes_per_sec": 0 00:33:27.480 }, 00:33:27.480 "claimed": false, 00:33:27.480 "zoned": false, 00:33:27.480 "supported_io_types": { 00:33:27.480 "read": true, 00:33:27.480 "write": true, 00:33:27.480 "unmap": false, 00:33:27.480 "flush": false, 00:33:27.480 "reset": true, 00:33:27.480 "nvme_admin": false, 00:33:27.480 "nvme_io": false, 00:33:27.480 "nvme_io_md": false, 00:33:27.480 "write_zeroes": true, 00:33:27.480 "zcopy": false, 00:33:27.480 "get_zone_info": false, 00:33:27.480 "zone_management": false, 00:33:27.480 "zone_append": false, 00:33:27.480 "compare": false, 00:33:27.480 "compare_and_write": false, 00:33:27.480 "abort": false, 00:33:27.480 "seek_hole": false, 00:33:27.480 "seek_data": false, 00:33:27.480 "copy": false, 00:33:27.480 "nvme_iov_md": false 00:33:27.480 }, 00:33:27.480 "memory_domains": [ 00:33:27.480 { 00:33:27.480 "dma_device_id": "system", 00:33:27.480 
"dma_device_type": 1 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.480 "dma_device_type": 2 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "dma_device_id": "system", 00:33:27.480 "dma_device_type": 1 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.480 "dma_device_type": 2 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "dma_device_id": "system", 00:33:27.480 "dma_device_type": 1 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.480 "dma_device_type": 2 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "dma_device_id": "system", 00:33:27.480 "dma_device_type": 1 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.480 "dma_device_type": 2 00:33:27.480 } 00:33:27.480 ], 00:33:27.480 "driver_specific": { 00:33:27.480 "raid": { 00:33:27.480 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:27.480 "strip_size_kb": 0, 00:33:27.480 "state": "online", 00:33:27.480 "raid_level": "raid1", 00:33:27.480 "superblock": true, 00:33:27.480 "num_base_bdevs": 4, 00:33:27.480 "num_base_bdevs_discovered": 4, 00:33:27.480 "num_base_bdevs_operational": 4, 00:33:27.480 "base_bdevs_list": [ 00:33:27.480 { 00:33:27.480 "name": "pt1", 00:33:27.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:27.480 "is_configured": true, 00:33:27.480 "data_offset": 2048, 00:33:27.480 "data_size": 63488 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "name": "pt2", 00:33:27.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:27.480 "is_configured": true, 00:33:27.480 "data_offset": 2048, 00:33:27.480 "data_size": 63488 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "name": "pt3", 00:33:27.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:27.480 "is_configured": true, 00:33:27.480 "data_offset": 2048, 00:33:27.480 "data_size": 63488 00:33:27.480 }, 00:33:27.480 { 00:33:27.480 "name": "pt4", 00:33:27.480 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:33:27.480 "is_configured": true, 00:33:27.480 "data_offset": 2048, 00:33:27.480 "data_size": 63488 00:33:27.480 } 00:33:27.480 ] 00:33:27.480 } 00:33:27.480 } 00:33:27.480 }' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:27.480 pt2 00:33:27.480 pt3 00:33:27.480 pt4' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:27.480 17:30:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:27.480 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.739 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:27.739 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:27.740 [2024-11-26 17:30:04.942224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9e85a89c-a53e-4859-8b0a-b4c02e8a4133 '!=' 9e85a89c-a53e-4859-8b0a-b4c02e8a4133 ']' 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.740 [2024-11-26 17:30:04.978025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:27.740 17:30:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.740 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.740 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.740 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:27.740 "name": "raid_bdev1", 00:33:27.740 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:27.740 "strip_size_kb": 0, 00:33:27.740 "state": "online", 
00:33:27.740 "raid_level": "raid1", 00:33:27.740 "superblock": true, 00:33:27.740 "num_base_bdevs": 4, 00:33:27.740 "num_base_bdevs_discovered": 3, 00:33:27.740 "num_base_bdevs_operational": 3, 00:33:27.740 "base_bdevs_list": [ 00:33:27.740 { 00:33:27.740 "name": null, 00:33:27.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.740 "is_configured": false, 00:33:27.740 "data_offset": 0, 00:33:27.740 "data_size": 63488 00:33:27.740 }, 00:33:27.740 { 00:33:27.740 "name": "pt2", 00:33:27.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:27.740 "is_configured": true, 00:33:27.740 "data_offset": 2048, 00:33:27.740 "data_size": 63488 00:33:27.740 }, 00:33:27.740 { 00:33:27.740 "name": "pt3", 00:33:27.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:27.740 "is_configured": true, 00:33:27.740 "data_offset": 2048, 00:33:27.740 "data_size": 63488 00:33:27.740 }, 00:33:27.740 { 00:33:27.740 "name": "pt4", 00:33:27.740 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:27.740 "is_configured": true, 00:33:27.740 "data_offset": 2048, 00:33:27.740 "data_size": 63488 00:33:27.740 } 00:33:27.740 ] 00:33:27.740 }' 00:33:27.740 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:27.740 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.999 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:27.999 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.999 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.999 [2024-11-26 17:30:05.442057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:27.999 [2024-11-26 17:30:05.442098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:27.999 [2024-11-26 17:30:05.442180] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:33:27.999 [2024-11-26 17:30:05.442257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:27.999 [2024-11-26 17:30:05.442269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:28.258 
17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.258 [2024-11-26 17:30:05.522035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:28.258 [2024-11-26 17:30:05.522105] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:28.258 [2024-11-26 17:30:05.522128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:33:28.258 [2024-11-26 17:30:05.522140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:28.258 [2024-11-26 17:30:05.524630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:28.258 [2024-11-26 17:30:05.524668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:28.258 [2024-11-26 17:30:05.524745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:28.258 [2024-11-26 17:30:05.524788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:28.258 pt2 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:28.258 "name": "raid_bdev1", 00:33:28.258 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:28.258 "strip_size_kb": 0, 00:33:28.258 "state": "configuring", 00:33:28.258 "raid_level": "raid1", 00:33:28.258 "superblock": true, 00:33:28.258 "num_base_bdevs": 4, 00:33:28.258 "num_base_bdevs_discovered": 1, 00:33:28.258 "num_base_bdevs_operational": 3, 00:33:28.258 "base_bdevs_list": [ 00:33:28.258 { 00:33:28.258 "name": null, 00:33:28.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.258 "is_configured": false, 00:33:28.258 "data_offset": 2048, 00:33:28.258 "data_size": 63488 00:33:28.258 }, 00:33:28.258 { 00:33:28.258 "name": "pt2", 00:33:28.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:28.258 "is_configured": true, 00:33:28.258 "data_offset": 2048, 00:33:28.258 "data_size": 63488 00:33:28.258 }, 00:33:28.258 { 00:33:28.258 "name": null, 00:33:28.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:28.258 "is_configured": false, 00:33:28.258 "data_offset": 2048, 00:33:28.258 "data_size": 63488 00:33:28.258 }, 00:33:28.258 { 00:33:28.258 "name": null, 00:33:28.258 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:28.258 "is_configured": false, 00:33:28.258 "data_offset": 2048, 00:33:28.258 "data_size": 63488 00:33:28.258 } 00:33:28.258 ] 00:33:28.258 }' 
00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:28.258 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.518 [2024-11-26 17:30:05.950201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:28.518 [2024-11-26 17:30:05.950269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:28.518 [2024-11-26 17:30:05.950296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:28.518 [2024-11-26 17:30:05.950309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:28.518 [2024-11-26 17:30:05.950801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:28.518 [2024-11-26 17:30:05.950823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:28.518 [2024-11-26 17:30:05.950912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:28.518 [2024-11-26 17:30:05.950935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:28.518 pt3 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.518 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.778 17:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.778 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:28.778 "name": "raid_bdev1", 00:33:28.778 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:28.778 "strip_size_kb": 0, 00:33:28.778 "state": "configuring", 00:33:28.778 "raid_level": "raid1", 00:33:28.778 "superblock": true, 00:33:28.778 "num_base_bdevs": 4, 00:33:28.778 "num_base_bdevs_discovered": 2, 00:33:28.778 "num_base_bdevs_operational": 3, 00:33:28.778 
"base_bdevs_list": [ 00:33:28.778 { 00:33:28.778 "name": null, 00:33:28.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.778 "is_configured": false, 00:33:28.778 "data_offset": 2048, 00:33:28.778 "data_size": 63488 00:33:28.778 }, 00:33:28.778 { 00:33:28.778 "name": "pt2", 00:33:28.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:28.778 "is_configured": true, 00:33:28.778 "data_offset": 2048, 00:33:28.778 "data_size": 63488 00:33:28.778 }, 00:33:28.778 { 00:33:28.778 "name": "pt3", 00:33:28.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:28.778 "is_configured": true, 00:33:28.778 "data_offset": 2048, 00:33:28.778 "data_size": 63488 00:33:28.778 }, 00:33:28.778 { 00:33:28.778 "name": null, 00:33:28.778 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:28.778 "is_configured": false, 00:33:28.778 "data_offset": 2048, 00:33:28.778 "data_size": 63488 00:33:28.778 } 00:33:28.778 ] 00:33:28.778 }' 00:33:28.778 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:28.778 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.037 [2024-11-26 17:30:06.418325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:29.037 [2024-11-26 17:30:06.418395] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.037 [2024-11-26 17:30:06.418426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:33:29.037 [2024-11-26 17:30:06.418438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.037 [2024-11-26 17:30:06.418883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.037 [2024-11-26 17:30:06.418902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:29.037 [2024-11-26 17:30:06.418990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:29.037 [2024-11-26 17:30:06.419013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:29.037 [2024-11-26 17:30:06.419165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:29.037 [2024-11-26 17:30:06.419176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:29.037 [2024-11-26 17:30:06.419436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:33:29.037 [2024-11-26 17:30:06.419599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:29.037 [2024-11-26 17:30:06.419612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:29.037 [2024-11-26 17:30:06.419751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:29.037 pt4 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.037 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:29.037 "name": "raid_bdev1", 00:33:29.037 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:29.037 "strip_size_kb": 0, 00:33:29.037 "state": "online", 00:33:29.037 "raid_level": "raid1", 00:33:29.037 "superblock": true, 00:33:29.037 "num_base_bdevs": 4, 00:33:29.037 "num_base_bdevs_discovered": 3, 00:33:29.037 "num_base_bdevs_operational": 3, 00:33:29.037 "base_bdevs_list": [ 00:33:29.037 { 00:33:29.037 "name": null, 00:33:29.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:29.037 "is_configured": false, 00:33:29.037 
"data_offset": 2048, 00:33:29.037 "data_size": 63488 00:33:29.037 }, 00:33:29.038 { 00:33:29.038 "name": "pt2", 00:33:29.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:29.038 "is_configured": true, 00:33:29.038 "data_offset": 2048, 00:33:29.038 "data_size": 63488 00:33:29.038 }, 00:33:29.038 { 00:33:29.038 "name": "pt3", 00:33:29.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:29.038 "is_configured": true, 00:33:29.038 "data_offset": 2048, 00:33:29.038 "data_size": 63488 00:33:29.038 }, 00:33:29.038 { 00:33:29.038 "name": "pt4", 00:33:29.038 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:29.038 "is_configured": true, 00:33:29.038 "data_offset": 2048, 00:33:29.038 "data_size": 63488 00:33:29.038 } 00:33:29.038 ] 00:33:29.038 }' 00:33:29.038 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:29.038 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.606 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:29.606 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.606 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.606 [2024-11-26 17:30:06.886406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:29.606 [2024-11-26 17:30:06.886437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:29.606 [2024-11-26 17:30:06.886520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:29.607 [2024-11-26 17:30:06.886599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:29.607 [2024-11-26 17:30:06.886616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:29.607 17:30:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.607 [2024-11-26 17:30:06.950407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:29.607 [2024-11-26 17:30:06.950474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:33:29.607 [2024-11-26 17:30:06.950495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:33:29.607 [2024-11-26 17:30:06.950511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.607 [2024-11-26 17:30:06.952993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.607 [2024-11-26 17:30:06.953039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:29.607 [2024-11-26 17:30:06.953154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:29.607 [2024-11-26 17:30:06.953203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:29.607 [2024-11-26 17:30:06.953350] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:29.607 [2024-11-26 17:30:06.953372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:29.607 [2024-11-26 17:30:06.953395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:33:29.607 [2024-11-26 17:30:06.953472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:29.607 [2024-11-26 17:30:06.953576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:29.607 pt1 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.607 17:30:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.607 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:29.607 "name": "raid_bdev1", 00:33:29.607 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:29.607 "strip_size_kb": 0, 00:33:29.607 "state": "configuring", 00:33:29.607 "raid_level": "raid1", 00:33:29.607 "superblock": true, 00:33:29.607 "num_base_bdevs": 4, 00:33:29.607 "num_base_bdevs_discovered": 2, 00:33:29.607 "num_base_bdevs_operational": 3, 00:33:29.607 "base_bdevs_list": [ 00:33:29.607 { 00:33:29.607 "name": null, 00:33:29.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:29.607 "is_configured": false, 00:33:29.607 "data_offset": 2048, 00:33:29.607 
"data_size": 63488 00:33:29.607 }, 00:33:29.607 { 00:33:29.607 "name": "pt2", 00:33:29.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:29.607 "is_configured": true, 00:33:29.607 "data_offset": 2048, 00:33:29.607 "data_size": 63488 00:33:29.607 }, 00:33:29.607 { 00:33:29.607 "name": "pt3", 00:33:29.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:29.607 "is_configured": true, 00:33:29.607 "data_offset": 2048, 00:33:29.607 "data_size": 63488 00:33:29.607 }, 00:33:29.607 { 00:33:29.607 "name": null, 00:33:29.607 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:29.607 "is_configured": false, 00:33:29.607 "data_offset": 2048, 00:33:29.607 "data_size": 63488 00:33:29.607 } 00:33:29.607 ] 00:33:29.607 }' 00:33:29.607 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:29.607 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.177 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:33:30.177 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.177 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.177 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:30.177 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.177 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:33:30.177 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:30.177 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.177 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.177 [2024-11-26 
17:30:07.458537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:30.177 [2024-11-26 17:30:07.458768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:30.177 [2024-11-26 17:30:07.458810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:33:30.177 [2024-11-26 17:30:07.458824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:30.177 [2024-11-26 17:30:07.459366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:30.177 [2024-11-26 17:30:07.459388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:30.177 [2024-11-26 17:30:07.459481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:30.178 [2024-11-26 17:30:07.459506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:30.178 [2024-11-26 17:30:07.459643] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:33:30.178 [2024-11-26 17:30:07.459654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:30.178 [2024-11-26 17:30:07.459945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:30.178 [2024-11-26 17:30:07.460108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:33:30.178 [2024-11-26 17:30:07.460129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:33:30.178 [2024-11-26 17:30:07.460283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:30.178 pt4 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:30.178 17:30:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:30.178 "name": "raid_bdev1", 00:33:30.178 "uuid": "9e85a89c-a53e-4859-8b0a-b4c02e8a4133", 00:33:30.178 "strip_size_kb": 0, 00:33:30.178 "state": "online", 00:33:30.178 "raid_level": "raid1", 00:33:30.178 "superblock": true, 00:33:30.178 "num_base_bdevs": 4, 00:33:30.178 "num_base_bdevs_discovered": 3, 00:33:30.178 "num_base_bdevs_operational": 3, 00:33:30.178 "base_bdevs_list": [ 00:33:30.178 { 
00:33:30.178 "name": null, 00:33:30.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.178 "is_configured": false, 00:33:30.178 "data_offset": 2048, 00:33:30.178 "data_size": 63488 00:33:30.178 }, 00:33:30.178 { 00:33:30.178 "name": "pt2", 00:33:30.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:30.178 "is_configured": true, 00:33:30.178 "data_offset": 2048, 00:33:30.178 "data_size": 63488 00:33:30.178 }, 00:33:30.178 { 00:33:30.178 "name": "pt3", 00:33:30.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:30.178 "is_configured": true, 00:33:30.178 "data_offset": 2048, 00:33:30.178 "data_size": 63488 00:33:30.178 }, 00:33:30.178 { 00:33:30.178 "name": "pt4", 00:33:30.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:30.178 "is_configured": true, 00:33:30.178 "data_offset": 2048, 00:33:30.178 "data_size": 63488 00:33:30.178 } 00:33:30.178 ] 00:33:30.178 }' 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:30.178 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.746 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:33:30.746 17:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:30.746 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.746 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.746 17:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:33:30.746 
17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.746 [2024-11-26 17:30:08.026920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9e85a89c-a53e-4859-8b0a-b4c02e8a4133 '!=' 9e85a89c-a53e-4859-8b0a-b4c02e8a4133 ']' 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74958 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74958 ']' 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74958 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74958 00:33:30.746 killing process with pid 74958 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74958' 00:33:30.746 17:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74958 00:33:30.746 [2024-11-26 17:30:08.100999] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:30.746 [2024-11-26 17:30:08.101104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:30.746 17:30:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74958 00:33:30.746 [2024-11-26 17:30:08.101182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:30.746 [2024-11-26 17:30:08.101197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:33:31.372 [2024-11-26 17:30:08.512205] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:32.334 17:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:33:32.335 00:33:32.335 real 0m8.827s 00:33:32.335 user 0m13.951s 00:33:32.335 sys 0m1.720s 00:33:32.335 ************************************ 00:33:32.335 END TEST raid_superblock_test 00:33:32.335 ************************************ 00:33:32.335 17:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:32.335 17:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.335 17:30:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:33:32.335 17:30:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:32.335 17:30:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:32.335 17:30:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:32.335 ************************************ 00:33:32.335 START TEST raid_read_error_test 00:33:32.335 ************************************ 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:33:32.335 
17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:32.335 17:30:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.201tTjflUw 00:33:32.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75456 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75456 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75456 ']' 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.335 17:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.594 [2024-11-26 17:30:09.872921] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:33:32.594 [2024-11-26 17:30:09.873540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75456 ] 00:33:32.853 [2024-11-26 17:30:10.069761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.853 [2024-11-26 17:30:10.182411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.111 [2024-11-26 17:30:10.395633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:33.111 [2024-11-26 17:30:10.395665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 BaseBdev1_malloc 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 true 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 [2024-11-26 17:30:10.879192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:33.680 [2024-11-26 17:30:10.879252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.680 [2024-11-26 17:30:10.879278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:33.680 [2024-11-26 17:30:10.879292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.680 [2024-11-26 17:30:10.881682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.680 [2024-11-26 17:30:10.881727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:33.680 BaseBdev1 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 BaseBdev2_malloc 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 true 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 [2024-11-26 17:30:10.944540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:33.680 [2024-11-26 17:30:10.945750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.680 [2024-11-26 17:30:10.945781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:33.680 [2024-11-26 17:30:10.945796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.680 [2024-11-26 17:30:10.948191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.680 [2024-11-26 17:30:10.948231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:33.680 BaseBdev2 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.680 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 BaseBdev3_malloc 00:33:33.680 17:30:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.680 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:33:33.680 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.680 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 true 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.681 [2024-11-26 17:30:11.026234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:33:33.681 [2024-11-26 17:30:11.026288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.681 [2024-11-26 17:30:11.026310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:33.681 [2024-11-26 17:30:11.026324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.681 [2024-11-26 17:30:11.028713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.681 [2024-11-26 17:30:11.028755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:33.681 BaseBdev3 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.681 BaseBdev4_malloc 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.681 true 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.681 [2024-11-26 17:30:11.091511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:33:33.681 [2024-11-26 17:30:11.091569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.681 [2024-11-26 17:30:11.091591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:33.681 [2024-11-26 17:30:11.091605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.681 [2024-11-26 17:30:11.093960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.681 [2024-11-26 17:30:11.094008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:33.681 BaseBdev4 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.681 [2024-11-26 17:30:11.099571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:33.681 [2024-11-26 17:30:11.101634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:33.681 [2024-11-26 17:30:11.101832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:33.681 [2024-11-26 17:30:11.101908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:33.681 [2024-11-26 17:30:11.102154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:33:33.681 [2024-11-26 17:30:11.102171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:33.681 [2024-11-26 17:30:11.102422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:33:33.681 [2024-11-26 17:30:11.102588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:33:33.681 [2024-11-26 17:30:11.102598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:33:33.681 [2024-11-26 17:30:11.102747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:33.681 17:30:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.681 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.940 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.940 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:33.940 "name": "raid_bdev1", 00:33:33.940 "uuid": "80da8c9b-df3d-4618-aa2e-93db7492a5e3", 00:33:33.940 "strip_size_kb": 0, 00:33:33.940 "state": "online", 00:33:33.940 "raid_level": "raid1", 00:33:33.940 "superblock": true, 00:33:33.940 "num_base_bdevs": 4, 00:33:33.940 "num_base_bdevs_discovered": 4, 00:33:33.940 "num_base_bdevs_operational": 4, 00:33:33.940 "base_bdevs_list": [ 00:33:33.940 { 
00:33:33.940 "name": "BaseBdev1", 00:33:33.940 "uuid": "7934a25b-8044-567a-a740-5678344afd95", 00:33:33.940 "is_configured": true, 00:33:33.940 "data_offset": 2048, 00:33:33.940 "data_size": 63488 00:33:33.940 }, 00:33:33.940 { 00:33:33.940 "name": "BaseBdev2", 00:33:33.940 "uuid": "590da35d-2986-5a89-8f44-248abbc924cf", 00:33:33.940 "is_configured": true, 00:33:33.940 "data_offset": 2048, 00:33:33.940 "data_size": 63488 00:33:33.940 }, 00:33:33.940 { 00:33:33.940 "name": "BaseBdev3", 00:33:33.940 "uuid": "24a29c35-bd0f-5678-9e76-8793fcf3dafd", 00:33:33.940 "is_configured": true, 00:33:33.940 "data_offset": 2048, 00:33:33.940 "data_size": 63488 00:33:33.940 }, 00:33:33.940 { 00:33:33.940 "name": "BaseBdev4", 00:33:33.940 "uuid": "34c937ce-2601-5d3b-ab94-7a17e3d4c94e", 00:33:33.940 "is_configured": true, 00:33:33.940 "data_offset": 2048, 00:33:33.940 "data_size": 63488 00:33:33.940 } 00:33:33.940 ] 00:33:33.940 }' 00:33:33.940 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:33.940 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.199 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:34.199 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:34.458 [2024-11-26 17:30:11.677205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.396 17:30:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.396 17:30:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:35.396 "name": "raid_bdev1", 00:33:35.396 "uuid": "80da8c9b-df3d-4618-aa2e-93db7492a5e3", 00:33:35.396 "strip_size_kb": 0, 00:33:35.396 "state": "online", 00:33:35.396 "raid_level": "raid1", 00:33:35.396 "superblock": true, 00:33:35.396 "num_base_bdevs": 4, 00:33:35.396 "num_base_bdevs_discovered": 4, 00:33:35.396 "num_base_bdevs_operational": 4, 00:33:35.396 "base_bdevs_list": [ 00:33:35.396 { 00:33:35.396 "name": "BaseBdev1", 00:33:35.396 "uuid": "7934a25b-8044-567a-a740-5678344afd95", 00:33:35.396 "is_configured": true, 00:33:35.396 "data_offset": 2048, 00:33:35.396 "data_size": 63488 00:33:35.396 }, 00:33:35.396 { 00:33:35.396 "name": "BaseBdev2", 00:33:35.396 "uuid": "590da35d-2986-5a89-8f44-248abbc924cf", 00:33:35.396 "is_configured": true, 00:33:35.396 "data_offset": 2048, 00:33:35.396 "data_size": 63488 00:33:35.396 }, 00:33:35.396 { 00:33:35.396 "name": "BaseBdev3", 00:33:35.396 "uuid": "24a29c35-bd0f-5678-9e76-8793fcf3dafd", 00:33:35.396 "is_configured": true, 00:33:35.396 "data_offset": 2048, 00:33:35.396 "data_size": 63488 00:33:35.396 }, 00:33:35.396 { 00:33:35.396 "name": "BaseBdev4", 00:33:35.396 "uuid": "34c937ce-2601-5d3b-ab94-7a17e3d4c94e", 00:33:35.396 "is_configured": true, 00:33:35.396 "data_offset": 2048, 00:33:35.396 "data_size": 63488 00:33:35.396 } 00:33:35.396 ] 00:33:35.396 }' 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:35.396 17:30:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.655 17:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:35.655 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.655 17:30:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:33:35.655 [2024-11-26 17:30:13.007874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:35.655 [2024-11-26 17:30:13.008054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:35.655 [2024-11-26 17:30:13.010962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:35.655 [2024-11-26 17:30:13.011143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:35.655 [2024-11-26 17:30:13.011298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:35.655 [2024-11-26 17:30:13.011566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:33:35.655 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.655 { 00:33:35.655 "results": [ 00:33:35.655 { 00:33:35.655 "job": "raid_bdev1", 00:33:35.655 "core_mask": "0x1", 00:33:35.656 "workload": "randrw", 00:33:35.656 "percentage": 50, 00:33:35.656 "status": "finished", 00:33:35.656 "queue_depth": 1, 00:33:35.656 "io_size": 131072, 00:33:35.656 "runtime": 1.328954, 00:33:35.656 "iops": 10726.481127262494, 00:33:35.656 "mibps": 1340.8101409078117, 00:33:35.656 "io_failed": 0, 00:33:35.656 "io_timeout": 0, 00:33:35.656 "avg_latency_us": 90.49217951929982, 00:33:35.656 "min_latency_us": 23.283809523809524, 00:33:35.656 "max_latency_us": 1497.9657142857143 00:33:35.656 } 00:33:35.656 ], 00:33:35.656 "core_count": 1 00:33:35.656 } 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75456 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75456 ']' 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75456 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75456 00:33:35.656 killing process with pid 75456 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75456' 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75456 00:33:35.656 [2024-11-26 17:30:13.056590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:35.656 17:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75456 00:33:36.224 [2024-11-26 17:30:13.385078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:37.160 17:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.201tTjflUw 00:33:37.160 17:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:37.160 17:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:37.419 17:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:33:37.419 17:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:33:37.419 17:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:37.419 17:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:37.419 17:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:33:37.419 00:33:37.420 real 0m4.871s 00:33:37.420 user 0m5.832s 00:33:37.420 sys 0m0.651s 
00:33:37.420 17:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:37.420 17:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:37.420 ************************************ 00:33:37.420 END TEST raid_read_error_test 00:33:37.420 ************************************ 00:33:37.420 17:30:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:33:37.420 17:30:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:37.420 17:30:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:37.420 17:30:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:37.420 ************************************ 00:33:37.420 START TEST raid_write_error_test 00:33:37.420 ************************************ 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IWt10DFcem 00:33:37.420 17:30:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75602 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75602 00:33:37.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75602 ']' 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:37.420 17:30:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:37.420 [2024-11-26 17:30:14.816121] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:33:37.420 [2024-11-26 17:30:14.816301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75602 ] 00:33:37.679 [2024-11-26 17:30:15.005498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.679 [2024-11-26 17:30:15.118359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.938 [2024-11-26 17:30:15.323213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:37.938 [2024-11-26 17:30:15.323279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.506 BaseBdev1_malloc 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.506 true 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.506 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.506 [2024-11-26 17:30:15.777871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:38.507 [2024-11-26 17:30:15.777939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.507 [2024-11-26 17:30:15.777967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:38.507 [2024-11-26 17:30:15.777982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.507 [2024-11-26 17:30:15.780555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:38.507 [2024-11-26 17:30:15.780749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:38.507 BaseBdev1 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.507 BaseBdev2_malloc 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:38.507 17:30:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.507 true 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.507 [2024-11-26 17:30:15.844396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:38.507 [2024-11-26 17:30:15.844458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.507 [2024-11-26 17:30:15.844478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:38.507 [2024-11-26 17:30:15.844493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.507 [2024-11-26 17:30:15.846932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:38.507 [2024-11-26 17:30:15.847108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:38.507 BaseBdev2 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:33:38.507 BaseBdev3_malloc 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.507 true 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.507 [2024-11-26 17:30:15.920577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:33:38.507 [2024-11-26 17:30:15.920637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.507 [2024-11-26 17:30:15.920661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:38.507 [2024-11-26 17:30:15.920676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.507 [2024-11-26 17:30:15.923105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:38.507 [2024-11-26 17:30:15.923144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:38.507 BaseBdev3 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.507 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.767 BaseBdev4_malloc 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.767 true 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.767 [2024-11-26 17:30:15.989226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:33:38.767 [2024-11-26 17:30:15.989283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.767 [2024-11-26 17:30:15.989309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:38.767 [2024-11-26 17:30:15.989324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.767 [2024-11-26 17:30:15.991905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:38.767 [2024-11-26 17:30:15.991947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:38.767 BaseBdev4 
00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.767 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.767 [2024-11-26 17:30:16.001285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:38.767 [2024-11-26 17:30:16.003609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:38.767 [2024-11-26 17:30:16.003684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:38.767 [2024-11-26 17:30:16.003749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:38.767 [2024-11-26 17:30:16.003979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:33:38.767 [2024-11-26 17:30:16.003998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:38.767 [2024-11-26 17:30:16.004289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:33:38.767 [2024-11-26 17:30:16.004463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:33:38.767 [2024-11-26 17:30:16.004474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:33:38.767 [2024-11-26 17:30:16.004668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:38.767 "name": "raid_bdev1", 00:33:38.767 "uuid": "dc40c55e-4859-4f83-ba12-1cbee0ab4c22", 00:33:38.767 "strip_size_kb": 0, 00:33:38.767 "state": "online", 00:33:38.767 "raid_level": "raid1", 00:33:38.767 "superblock": true, 00:33:38.767 "num_base_bdevs": 4, 00:33:38.767 "num_base_bdevs_discovered": 4, 00:33:38.767 
"num_base_bdevs_operational": 4, 00:33:38.767 "base_bdevs_list": [ 00:33:38.767 { 00:33:38.767 "name": "BaseBdev1", 00:33:38.767 "uuid": "7023a6ef-2df7-507a-a1d7-56b4991eaf81", 00:33:38.767 "is_configured": true, 00:33:38.767 "data_offset": 2048, 00:33:38.767 "data_size": 63488 00:33:38.767 }, 00:33:38.767 { 00:33:38.767 "name": "BaseBdev2", 00:33:38.767 "uuid": "547e7bad-2846-5891-b295-9a653c6c564e", 00:33:38.767 "is_configured": true, 00:33:38.767 "data_offset": 2048, 00:33:38.767 "data_size": 63488 00:33:38.767 }, 00:33:38.767 { 00:33:38.767 "name": "BaseBdev3", 00:33:38.767 "uuid": "f8363795-3c7e-55f8-a265-17f6139e0f6e", 00:33:38.767 "is_configured": true, 00:33:38.767 "data_offset": 2048, 00:33:38.767 "data_size": 63488 00:33:38.767 }, 00:33:38.767 { 00:33:38.767 "name": "BaseBdev4", 00:33:38.767 "uuid": "51557389-be8b-5b55-be63-3beb604d3d7e", 00:33:38.767 "is_configured": true, 00:33:38.767 "data_offset": 2048, 00:33:38.767 "data_size": 63488 00:33:38.767 } 00:33:38.767 ] 00:33:38.767 }' 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:38.767 17:30:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.026 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:39.026 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:39.313 [2024-11-26 17:30:16.527019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.257 [2024-11-26 17:30:17.415027] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:33:40.257 [2024-11-26 17:30:17.415102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:40.257 [2024-11-26 17:30:17.415333] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.257 "name": "raid_bdev1", 00:33:40.257 "uuid": "dc40c55e-4859-4f83-ba12-1cbee0ab4c22", 00:33:40.257 "strip_size_kb": 0, 00:33:40.257 "state": "online", 00:33:40.257 "raid_level": "raid1", 00:33:40.257 "superblock": true, 00:33:40.257 "num_base_bdevs": 4, 00:33:40.257 "num_base_bdevs_discovered": 3, 00:33:40.257 "num_base_bdevs_operational": 3, 00:33:40.257 "base_bdevs_list": [ 00:33:40.257 { 00:33:40.257 "name": null, 00:33:40.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.257 "is_configured": false, 00:33:40.257 "data_offset": 0, 00:33:40.257 "data_size": 63488 00:33:40.257 }, 00:33:40.257 { 00:33:40.257 "name": "BaseBdev2", 00:33:40.257 "uuid": "547e7bad-2846-5891-b295-9a653c6c564e", 00:33:40.257 "is_configured": true, 00:33:40.257 "data_offset": 2048, 00:33:40.257 "data_size": 63488 00:33:40.257 }, 00:33:40.257 { 00:33:40.257 "name": "BaseBdev3", 00:33:40.257 "uuid": "f8363795-3c7e-55f8-a265-17f6139e0f6e", 00:33:40.257 "is_configured": true, 00:33:40.257 "data_offset": 2048, 00:33:40.257 "data_size": 63488 00:33:40.257 }, 00:33:40.257 { 00:33:40.257 "name": "BaseBdev4", 00:33:40.257 "uuid": "51557389-be8b-5b55-be63-3beb604d3d7e", 00:33:40.257 "is_configured": true, 00:33:40.257 "data_offset": 2048, 00:33:40.257 "data_size": 63488 00:33:40.257 } 00:33:40.257 ] 
00:33:40.257 }' 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:40.257 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.516 [2024-11-26 17:30:17.912285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:40.516 [2024-11-26 17:30:17.912319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:40.516 { 00:33:40.516 "results": [ 00:33:40.516 { 00:33:40.516 "job": "raid_bdev1", 00:33:40.516 "core_mask": "0x1", 00:33:40.516 "workload": "randrw", 00:33:40.516 "percentage": 50, 00:33:40.516 "status": "finished", 00:33:40.516 "queue_depth": 1, 00:33:40.516 "io_size": 131072, 00:33:40.516 "runtime": 1.382944, 00:33:40.516 "iops": 11437.194853876947, 00:33:40.516 "mibps": 1429.6493567346183, 00:33:40.516 "io_failed": 0, 00:33:40.516 "io_timeout": 0, 00:33:40.516 "avg_latency_us": 84.60318415688967, 00:33:40.516 "min_latency_us": 25.112380952380953, 00:33:40.516 "max_latency_us": 1599.3904761904762 00:33:40.516 } 00:33:40.516 ], 00:33:40.516 "core_count": 1 00:33:40.516 } 00:33:40.516 [2024-11-26 17:30:17.915295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:40.516 [2024-11-26 17:30:17.915343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:40.516 [2024-11-26 17:30:17.915444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:40.516 [2024-11-26 17:30:17.915459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75602 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75602 ']' 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75602 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:40.516 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75602 00:33:40.806 killing process with pid 75602 00:33:40.806 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:40.806 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:40.806 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75602' 00:33:40.806 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75602 00:33:40.806 [2024-11-26 17:30:17.963576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:40.806 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75602 00:33:41.064 [2024-11-26 17:30:18.307311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:42.441 17:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:42.441 17:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IWt10DFcem 00:33:42.441 17:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:42.441 ************************************ 00:33:42.441 END TEST 
raid_write_error_test 00:33:42.441 ************************************ 00:33:42.441 17:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:33:42.441 17:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:33:42.442 17:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:42.442 17:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:42.442 17:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:33:42.442 00:33:42.442 real 0m4.870s 00:33:42.442 user 0m5.775s 00:33:42.442 sys 0m0.653s 00:33:42.442 17:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.442 17:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.442 17:30:19 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:33:42.442 17:30:19 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:33:42.442 17:30:19 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:33:42.442 17:30:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:33:42.442 17:30:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:42.442 17:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:42.442 ************************************ 00:33:42.442 START TEST raid_rebuild_test 00:33:42.442 ************************************ 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75751 00:33:42.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75751 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75751 ']' 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:42.442 17:30:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.442 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:42.442 Zero copy mechanism will not be used. 00:33:42.442 [2024-11-26 17:30:19.709855] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:33:42.442 [2024-11-26 17:30:19.709984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75751 ] 00:33:42.442 [2024-11-26 17:30:19.882155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.700 [2024-11-26 17:30:19.995615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.959 [2024-11-26 17:30:20.214649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:42.959 [2024-11-26 17:30:20.214710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.529 BaseBdev1_malloc 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.529 [2024-11-26 17:30:20.714229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:43.529 
[2024-11-26 17:30:20.714441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.529 [2024-11-26 17:30:20.714507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:43.529 [2024-11-26 17:30:20.714613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.529 [2024-11-26 17:30:20.717276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.529 [2024-11-26 17:30:20.717448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:43.529 BaseBdev1 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.529 BaseBdev2_malloc 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.529 [2024-11-26 17:30:20.767779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:43.529 [2024-11-26 17:30:20.767958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.529 [2024-11-26 17:30:20.768019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:33:43.529 [2024-11-26 17:30:20.768117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.529 [2024-11-26 17:30:20.770504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.529 BaseBdev2 00:33:43.529 [2024-11-26 17:30:20.770642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.529 spare_malloc 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.529 spare_delay 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.529 [2024-11-26 17:30:20.841296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:43.529 [2024-11-26 17:30:20.841497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:33:43.529 [2024-11-26 17:30:20.841565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:43.529 [2024-11-26 17:30:20.841786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.529 [2024-11-26 17:30:20.844677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.529 [2024-11-26 17:30:20.844838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:43.529 spare 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.529 [2024-11-26 17:30:20.849550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:43.529 [2024-11-26 17:30:20.851787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:43.529 [2024-11-26 17:30:20.852005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:43.529 [2024-11-26 17:30:20.852030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:43.529 [2024-11-26 17:30:20.852339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:43.529 [2024-11-26 17:30:20.852519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:43.529 [2024-11-26 17:30:20.852533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:43.529 [2024-11-26 17:30:20.852692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.529 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:43.529 "name": "raid_bdev1", 00:33:43.529 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:43.529 "strip_size_kb": 0, 00:33:43.529 "state": "online", 00:33:43.529 
"raid_level": "raid1", 00:33:43.529 "superblock": false, 00:33:43.529 "num_base_bdevs": 2, 00:33:43.529 "num_base_bdevs_discovered": 2, 00:33:43.529 "num_base_bdevs_operational": 2, 00:33:43.529 "base_bdevs_list": [ 00:33:43.529 { 00:33:43.530 "name": "BaseBdev1", 00:33:43.530 "uuid": "69c4dbf5-1b0f-50de-a00d-6e03c85e8dab", 00:33:43.530 "is_configured": true, 00:33:43.530 "data_offset": 0, 00:33:43.530 "data_size": 65536 00:33:43.530 }, 00:33:43.530 { 00:33:43.530 "name": "BaseBdev2", 00:33:43.530 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:43.530 "is_configured": true, 00:33:43.530 "data_offset": 0, 00:33:43.530 "data_size": 65536 00:33:43.530 } 00:33:43.530 ] 00:33:43.530 }' 00:33:43.530 17:30:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:43.530 17:30:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:44.098 [2024-11-26 17:30:21.289915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:44.098 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:44.357 [2024-11-26 17:30:21.661770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:44.357 /dev/nbd0 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:44.357 1+0 records in 00:33:44.357 1+0 records out 00:33:44.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839303 s, 4.9 MB/s 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:33:44.357 17:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:33:48.543 65536+0 records in 00:33:48.543 65536+0 records out 00:33:48.543 33554432 bytes (34 MB, 32 MiB) copied, 4.15737 s, 8.1 MB/s 00:33:48.543 17:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:48.543 17:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:48.543 17:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:48.543 17:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:48.543 17:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:48.543 17:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:48.543 17:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:48.801 [2024-11-26 17:30:26.165041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.801 [2024-11-26 17:30:26.185161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:48.801 "name": "raid_bdev1", 00:33:48.801 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:48.801 "strip_size_kb": 0, 00:33:48.801 "state": "online", 00:33:48.801 "raid_level": "raid1", 00:33:48.801 "superblock": false, 00:33:48.801 "num_base_bdevs": 2, 00:33:48.801 "num_base_bdevs_discovered": 1, 00:33:48.801 "num_base_bdevs_operational": 1, 00:33:48.801 "base_bdevs_list": [ 00:33:48.801 { 00:33:48.801 "name": null, 00:33:48.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:48.801 "is_configured": false, 00:33:48.801 "data_offset": 0, 00:33:48.801 "data_size": 65536 00:33:48.801 }, 00:33:48.801 { 00:33:48.801 "name": "BaseBdev2", 00:33:48.801 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:48.801 "is_configured": true, 00:33:48.801 "data_offset": 0, 00:33:48.801 "data_size": 65536 00:33:48.801 } 00:33:48.801 ] 00:33:48.801 }' 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:48.801 17:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:49.364 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:49.364 17:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.364 17:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:49.364 [2024-11-26 17:30:26.601273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:49.364 [2024-11-26 17:30:26.617843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:33:49.364 17:30:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.364 17:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:49.364 [2024-11-26 17:30:26.619978] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:50.297 "name": "raid_bdev1", 00:33:50.297 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:50.297 "strip_size_kb": 0, 00:33:50.297 "state": "online", 00:33:50.297 "raid_level": "raid1", 00:33:50.297 "superblock": false, 00:33:50.297 "num_base_bdevs": 2, 00:33:50.297 "num_base_bdevs_discovered": 2, 00:33:50.297 "num_base_bdevs_operational": 2, 00:33:50.297 "process": { 00:33:50.297 "type": "rebuild", 00:33:50.297 "target": "spare", 00:33:50.297 "progress": { 00:33:50.297 "blocks": 20480, 
00:33:50.297 "percent": 31 00:33:50.297 } 00:33:50.297 }, 00:33:50.297 "base_bdevs_list": [ 00:33:50.297 { 00:33:50.297 "name": "spare", 00:33:50.297 "uuid": "bc3acd70-e06c-5693-ab0c-05627b98a5d0", 00:33:50.297 "is_configured": true, 00:33:50.297 "data_offset": 0, 00:33:50.297 "data_size": 65536 00:33:50.297 }, 00:33:50.297 { 00:33:50.297 "name": "BaseBdev2", 00:33:50.297 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:50.297 "is_configured": true, 00:33:50.297 "data_offset": 0, 00:33:50.297 "data_size": 65536 00:33:50.297 } 00:33:50.297 ] 00:33:50.297 }' 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:50.297 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:50.555 [2024-11-26 17:30:27.765621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:50.555 [2024-11-26 17:30:27.827379] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:50.555 [2024-11-26 17:30:27.827641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:50.555 [2024-11-26 17:30:27.827667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:50.555 [2024-11-26 17:30:27.827682] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:50.555 17:30:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.555 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:50.555 "name": "raid_bdev1", 00:33:50.555 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:50.555 "strip_size_kb": 0, 00:33:50.555 "state": "online", 00:33:50.555 "raid_level": "raid1", 00:33:50.555 
"superblock": false, 00:33:50.555 "num_base_bdevs": 2, 00:33:50.555 "num_base_bdevs_discovered": 1, 00:33:50.555 "num_base_bdevs_operational": 1, 00:33:50.555 "base_bdevs_list": [ 00:33:50.555 { 00:33:50.555 "name": null, 00:33:50.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.555 "is_configured": false, 00:33:50.556 "data_offset": 0, 00:33:50.556 "data_size": 65536 00:33:50.556 }, 00:33:50.556 { 00:33:50.556 "name": "BaseBdev2", 00:33:50.556 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:50.556 "is_configured": true, 00:33:50.556 "data_offset": 0, 00:33:50.556 "data_size": 65536 00:33:50.556 } 00:33:50.556 ] 00:33:50.556 }' 00:33:50.556 17:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:50.556 17:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:33:51.122 "name": "raid_bdev1", 00:33:51.122 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:51.122 "strip_size_kb": 0, 00:33:51.122 "state": "online", 00:33:51.122 "raid_level": "raid1", 00:33:51.122 "superblock": false, 00:33:51.122 "num_base_bdevs": 2, 00:33:51.122 "num_base_bdevs_discovered": 1, 00:33:51.122 "num_base_bdevs_operational": 1, 00:33:51.122 "base_bdevs_list": [ 00:33:51.122 { 00:33:51.122 "name": null, 00:33:51.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.122 "is_configured": false, 00:33:51.122 "data_offset": 0, 00:33:51.122 "data_size": 65536 00:33:51.122 }, 00:33:51.122 { 00:33:51.122 "name": "BaseBdev2", 00:33:51.122 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:51.122 "is_configured": true, 00:33:51.122 "data_offset": 0, 00:33:51.122 "data_size": 65536 00:33:51.122 } 00:33:51.122 ] 00:33:51.122 }' 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.122 [2024-11-26 17:30:28.462467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:51.122 [2024-11-26 17:30:28.478761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:33:51.122 17:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.122 
17:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:51.122 [2024-11-26 17:30:28.480980] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:52.055 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:52.055 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:52.055 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:52.055 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:52.055 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:52.055 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:52.055 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:52.055 17:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.055 17:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.312 17:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.312 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:52.312 "name": "raid_bdev1", 00:33:52.312 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:52.312 "strip_size_kb": 0, 00:33:52.312 "state": "online", 00:33:52.312 "raid_level": "raid1", 00:33:52.312 "superblock": false, 00:33:52.312 "num_base_bdevs": 2, 00:33:52.312 "num_base_bdevs_discovered": 2, 00:33:52.312 "num_base_bdevs_operational": 2, 00:33:52.312 "process": { 00:33:52.312 "type": "rebuild", 00:33:52.312 "target": "spare", 00:33:52.312 "progress": { 00:33:52.312 "blocks": 20480, 00:33:52.312 "percent": 31 00:33:52.312 } 00:33:52.312 }, 00:33:52.312 "base_bdevs_list": [ 
00:33:52.312 { 00:33:52.312 "name": "spare", 00:33:52.313 "uuid": "bc3acd70-e06c-5693-ab0c-05627b98a5d0", 00:33:52.313 "is_configured": true, 00:33:52.313 "data_offset": 0, 00:33:52.313 "data_size": 65536 00:33:52.313 }, 00:33:52.313 { 00:33:52.313 "name": "BaseBdev2", 00:33:52.313 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:52.313 "is_configured": true, 00:33:52.313 "data_offset": 0, 00:33:52.313 "data_size": 65536 00:33:52.313 } 00:33:52.313 ] 00:33:52.313 }' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=383 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:52.313 
17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:52.313 "name": "raid_bdev1", 00:33:52.313 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:52.313 "strip_size_kb": 0, 00:33:52.313 "state": "online", 00:33:52.313 "raid_level": "raid1", 00:33:52.313 "superblock": false, 00:33:52.313 "num_base_bdevs": 2, 00:33:52.313 "num_base_bdevs_discovered": 2, 00:33:52.313 "num_base_bdevs_operational": 2, 00:33:52.313 "process": { 00:33:52.313 "type": "rebuild", 00:33:52.313 "target": "spare", 00:33:52.313 "progress": { 00:33:52.313 "blocks": 22528, 00:33:52.313 "percent": 34 00:33:52.313 } 00:33:52.313 }, 00:33:52.313 "base_bdevs_list": [ 00:33:52.313 { 00:33:52.313 "name": "spare", 00:33:52.313 "uuid": "bc3acd70-e06c-5693-ab0c-05627b98a5d0", 00:33:52.313 "is_configured": true, 00:33:52.313 "data_offset": 0, 00:33:52.313 "data_size": 65536 00:33:52.313 }, 00:33:52.313 { 00:33:52.313 "name": "BaseBdev2", 00:33:52.313 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:52.313 "is_configured": true, 00:33:52.313 "data_offset": 0, 00:33:52.313 "data_size": 65536 00:33:52.313 } 00:33:52.313 ] 00:33:52.313 }' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:52.313 17:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:53.688 "name": "raid_bdev1", 00:33:53.688 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:53.688 "strip_size_kb": 0, 00:33:53.688 "state": "online", 00:33:53.688 "raid_level": "raid1", 00:33:53.688 "superblock": false, 00:33:53.688 "num_base_bdevs": 2, 00:33:53.688 "num_base_bdevs_discovered": 2, 00:33:53.688 "num_base_bdevs_operational": 2, 00:33:53.688 "process": { 
00:33:53.688 "type": "rebuild", 00:33:53.688 "target": "spare", 00:33:53.688 "progress": { 00:33:53.688 "blocks": 45056, 00:33:53.688 "percent": 68 00:33:53.688 } 00:33:53.688 }, 00:33:53.688 "base_bdevs_list": [ 00:33:53.688 { 00:33:53.688 "name": "spare", 00:33:53.688 "uuid": "bc3acd70-e06c-5693-ab0c-05627b98a5d0", 00:33:53.688 "is_configured": true, 00:33:53.688 "data_offset": 0, 00:33:53.688 "data_size": 65536 00:33:53.688 }, 00:33:53.688 { 00:33:53.688 "name": "BaseBdev2", 00:33:53.688 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:53.688 "is_configured": true, 00:33:53.688 "data_offset": 0, 00:33:53.688 "data_size": 65536 00:33:53.688 } 00:33:53.688 ] 00:33:53.688 }' 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:53.688 17:30:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:54.624 [2024-11-26 17:30:31.700479] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:54.624 [2024-11-26 17:30:31.700560] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:54.624 [2024-11-26 17:30:31.700626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:54.624 "name": "raid_bdev1", 00:33:54.624 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:54.624 "strip_size_kb": 0, 00:33:54.624 "state": "online", 00:33:54.624 "raid_level": "raid1", 00:33:54.624 "superblock": false, 00:33:54.624 "num_base_bdevs": 2, 00:33:54.624 "num_base_bdevs_discovered": 2, 00:33:54.624 "num_base_bdevs_operational": 2, 00:33:54.624 "base_bdevs_list": [ 00:33:54.624 { 00:33:54.624 "name": "spare", 00:33:54.624 "uuid": "bc3acd70-e06c-5693-ab0c-05627b98a5d0", 00:33:54.624 "is_configured": true, 00:33:54.624 "data_offset": 0, 00:33:54.624 "data_size": 65536 00:33:54.624 }, 00:33:54.624 { 00:33:54.624 "name": "BaseBdev2", 00:33:54.624 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:54.624 "is_configured": true, 00:33:54.624 "data_offset": 0, 00:33:54.624 "data_size": 65536 00:33:54.624 } 00:33:54.624 ] 00:33:54.624 }' 00:33:54.624 17:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:54.624 17:30:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.624 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:54.883 "name": "raid_bdev1", 00:33:54.883 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:54.883 "strip_size_kb": 0, 00:33:54.883 "state": "online", 00:33:54.883 "raid_level": "raid1", 00:33:54.883 "superblock": false, 00:33:54.883 "num_base_bdevs": 2, 00:33:54.883 "num_base_bdevs_discovered": 2, 00:33:54.883 "num_base_bdevs_operational": 2, 00:33:54.883 "base_bdevs_list": [ 00:33:54.883 { 00:33:54.883 "name": "spare", 00:33:54.883 "uuid": "bc3acd70-e06c-5693-ab0c-05627b98a5d0", 00:33:54.883 "is_configured": true, 
00:33:54.883 "data_offset": 0, 00:33:54.883 "data_size": 65536 00:33:54.883 }, 00:33:54.883 { 00:33:54.883 "name": "BaseBdev2", 00:33:54.883 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:54.883 "is_configured": true, 00:33:54.883 "data_offset": 0, 00:33:54.883 "data_size": 65536 00:33:54.883 } 00:33:54.883 ] 00:33:54.883 }' 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.883 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:54.883 "name": "raid_bdev1", 00:33:54.883 "uuid": "d1b7dde0-f4dc-48c2-9386-95df55e30516", 00:33:54.883 "strip_size_kb": 0, 00:33:54.884 "state": "online", 00:33:54.884 "raid_level": "raid1", 00:33:54.884 "superblock": false, 00:33:54.884 "num_base_bdevs": 2, 00:33:54.884 "num_base_bdevs_discovered": 2, 00:33:54.884 "num_base_bdevs_operational": 2, 00:33:54.884 "base_bdevs_list": [ 00:33:54.884 { 00:33:54.884 "name": "spare", 00:33:54.884 "uuid": "bc3acd70-e06c-5693-ab0c-05627b98a5d0", 00:33:54.884 "is_configured": true, 00:33:54.884 "data_offset": 0, 00:33:54.884 "data_size": 65536 00:33:54.884 }, 00:33:54.884 { 00:33:54.884 "name": "BaseBdev2", 00:33:54.884 "uuid": "4a664acf-e9c1-5798-9588-b2ceb964cc7e", 00:33:54.884 "is_configured": true, 00:33:54.884 "data_offset": 0, 00:33:54.884 "data_size": 65536 00:33:54.884 } 00:33:54.884 ] 00:33:54.884 }' 00:33:54.884 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:54.884 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.465 [2024-11-26 17:30:32.641561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:55.465 [2024-11-26 17:30:32.641711] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:55.465 [2024-11-26 17:30:32.641869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:55.465 [2024-11-26 17:30:32.641944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:55.465 [2024-11-26 17:30:32.641956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:55.465 17:30:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:55.723 /dev/nbd0 00:33:55.723 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:55.723 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:55.723 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:55.723 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:33:55.723 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:55.723 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:55.723 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:55.723 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:33:55.723 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:55.724 1+0 records in 00:33:55.724 1+0 records out 00:33:55.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171912 s, 23.8 MB/s 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:55.724 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:33:55.983 /dev/nbd1 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:55.983 1+0 records in 00:33:55.983 1+0 records out 00:33:55.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404375 s, 10.1 MB/s 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:55.983 17:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:56.241 17:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:33:56.241 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:56.241 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:56.242 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:56.242 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:56.242 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:56.242 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:56.500 17:30:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:56.757 17:30:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75751 00:33:56.758 17:30:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75751 ']' 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75751 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75751 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.758 killing process with pid 75751 00:33:56.758 Received shutdown signal, test time was about 60.000000 seconds 00:33:56.758 00:33:56.758 Latency(us) 00:33:56.758 [2024-11-26T17:30:34.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.758 [2024-11-26T17:30:34.205Z] =================================================================================================================== 00:33:56.758 [2024-11-26T17:30:34.205Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75751' 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75751 00:33:56.758 [2024-11-26 17:30:34.170757] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:56.758 17:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75751 00:33:57.324 [2024-11-26 17:30:34.481362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:33:58.258 00:33:58.258 real 0m16.016s 00:33:58.258 user 0m18.448s 00:33:58.258 sys 0m3.314s 00:33:58.258 17:30:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.258 ************************************ 00:33:58.258 END TEST raid_rebuild_test 00:33:58.258 ************************************ 00:33:58.258 17:30:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:33:58.258 17:30:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:33:58.258 17:30:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.258 17:30:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:58.258 ************************************ 00:33:58.258 START TEST raid_rebuild_test_sb 00:33:58.258 ************************************ 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:58.258 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76174 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76174 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76174 ']' 00:33:58.516 17:30:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:58.516 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.516 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:58.516 Zero copy mechanism will not be used. 00:33:58.516 [2024-11-26 17:30:35.822628] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:33:58.516 [2024-11-26 17:30:35.822812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76174 ] 00:33:58.774 [2024-11-26 17:30:36.011313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.774 [2024-11-26 17:30:36.133309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.032 [2024-11-26 17:30:36.342201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:59.032 [2024-11-26 17:30:36.342242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:59.290 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:59.290 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:33:59.290 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:33:59.290 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:59.290 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.290 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.549 BaseBdev1_malloc 00:33:59.549 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.549 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:59.549 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.549 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.549 [2024-11-26 17:30:36.782082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:59.550 [2024-11-26 17:30:36.782314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:59.550 [2024-11-26 17:30:36.782383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:59.550 [2024-11-26 17:30:36.782490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:59.550 [2024-11-26 17:30:36.785197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:59.550 [2024-11-26 17:30:36.785235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:59.550 BaseBdev1 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:59.550 17:30:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.550 BaseBdev2_malloc 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.550 [2024-11-26 17:30:36.835991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:59.550 [2024-11-26 17:30:36.836173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:59.550 [2024-11-26 17:30:36.836249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:59.550 [2024-11-26 17:30:36.836347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:59.550 [2024-11-26 17:30:36.838738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:59.550 [2024-11-26 17:30:36.838883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:59.550 BaseBdev2 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.550 spare_malloc 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.550 spare_delay 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.550 [2024-11-26 17:30:36.909898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:59.550 [2024-11-26 17:30:36.910082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:59.550 [2024-11-26 17:30:36.910150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:59.550 [2024-11-26 17:30:36.910168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:59.550 [2024-11-26 17:30:36.912624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:59.550 [2024-11-26 17:30:36.912775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:59.550 spare 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.550 17:30:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.550 [2024-11-26 17:30:36.917962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:59.550 [2024-11-26 17:30:36.920112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:59.550 [2024-11-26 17:30:36.920389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:59.550 [2024-11-26 17:30:36.920412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:59.550 [2024-11-26 17:30:36.920662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:59.550 [2024-11-26 17:30:36.920821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:59.550 [2024-11-26 17:30:36.920831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:59.550 [2024-11-26 17:30:36.920973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:59.550 "name": "raid_bdev1", 00:33:59.550 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:33:59.550 "strip_size_kb": 0, 00:33:59.550 "state": "online", 00:33:59.550 "raid_level": "raid1", 00:33:59.550 "superblock": true, 00:33:59.550 "num_base_bdevs": 2, 00:33:59.550 "num_base_bdevs_discovered": 2, 00:33:59.550 "num_base_bdevs_operational": 2, 00:33:59.550 "base_bdevs_list": [ 00:33:59.550 { 00:33:59.550 "name": "BaseBdev1", 00:33:59.550 "uuid": "428c711a-8986-59ed-ace3-a0125e4da568", 00:33:59.550 "is_configured": true, 00:33:59.550 "data_offset": 2048, 00:33:59.550 "data_size": 63488 00:33:59.550 }, 00:33:59.550 { 00:33:59.550 "name": "BaseBdev2", 00:33:59.550 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:33:59.550 "is_configured": true, 00:33:59.550 "data_offset": 2048, 00:33:59.550 "data_size": 63488 00:33:59.550 } 00:33:59.550 ] 00:33:59.550 }' 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:59.550 17:30:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.116 [2024-11-26 17:30:37.378359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:00.116 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:00.374 [2024-11-26 17:30:37.658218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:00.374 /dev/nbd0 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:00.374 1+0 records in 00:34:00.374 1+0 records out 00:34:00.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373022 s, 11.0 MB/s 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:34:00.374 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:34:05.639 63488+0 records in 00:34:05.639 63488+0 records out 00:34:05.639 32505856 bytes (33 MB, 31 MiB) copied, 4.90333 s, 6.6 MB/s 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:05.639 17:30:42 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:05.639 [2024-11-26 17:30:42.902320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.639 [2024-11-26 17:30:42.914425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:05.639 "name": "raid_bdev1", 00:34:05.639 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:05.639 "strip_size_kb": 0, 00:34:05.639 "state": "online", 00:34:05.639 "raid_level": "raid1", 00:34:05.639 "superblock": true, 
00:34:05.639 "num_base_bdevs": 2, 00:34:05.639 "num_base_bdevs_discovered": 1, 00:34:05.639 "num_base_bdevs_operational": 1, 00:34:05.639 "base_bdevs_list": [ 00:34:05.639 { 00:34:05.639 "name": null, 00:34:05.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.639 "is_configured": false, 00:34:05.639 "data_offset": 0, 00:34:05.639 "data_size": 63488 00:34:05.639 }, 00:34:05.639 { 00:34:05.639 "name": "BaseBdev2", 00:34:05.639 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:05.639 "is_configured": true, 00:34:05.639 "data_offset": 2048, 00:34:05.639 "data_size": 63488 00:34:05.639 } 00:34:05.639 ] 00:34:05.639 }' 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:05.639 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.897 17:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:05.897 17:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.897 17:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.897 [2024-11-26 17:30:43.322526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:05.897 [2024-11-26 17:30:43.341604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:34:06.155 17:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.155 17:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:06.155 [2024-11-26 17:30:43.343828] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:07.088 "name": "raid_bdev1", 00:34:07.088 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:07.088 "strip_size_kb": 0, 00:34:07.088 "state": "online", 00:34:07.088 "raid_level": "raid1", 00:34:07.088 "superblock": true, 00:34:07.088 "num_base_bdevs": 2, 00:34:07.088 "num_base_bdevs_discovered": 2, 00:34:07.088 "num_base_bdevs_operational": 2, 00:34:07.088 "process": { 00:34:07.088 "type": "rebuild", 00:34:07.088 "target": "spare", 00:34:07.088 "progress": { 00:34:07.088 "blocks": 20480, 00:34:07.088 "percent": 32 00:34:07.088 } 00:34:07.088 }, 00:34:07.088 "base_bdevs_list": [ 00:34:07.088 { 00:34:07.088 "name": "spare", 00:34:07.088 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:07.088 "is_configured": true, 00:34:07.088 "data_offset": 2048, 00:34:07.088 "data_size": 63488 00:34:07.088 }, 00:34:07.088 { 00:34:07.088 "name": "BaseBdev2", 00:34:07.088 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:07.088 "is_configured": true, 00:34:07.088 "data_offset": 2048, 00:34:07.088 "data_size": 63488 
00:34:07.088 } 00:34:07.088 ] 00:34:07.088 }' 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.088 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.088 [2024-11-26 17:30:44.485260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:07.346 [2024-11-26 17:30:44.551568] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:07.346 [2024-11-26 17:30:44.551641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:07.346 [2024-11-26 17:30:44.551657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:07.346 [2024-11-26 17:30:44.551668] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:07.346 "name": "raid_bdev1", 00:34:07.346 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:07.346 "strip_size_kb": 0, 00:34:07.346 "state": "online", 00:34:07.346 "raid_level": "raid1", 00:34:07.346 "superblock": true, 00:34:07.346 "num_base_bdevs": 2, 00:34:07.346 "num_base_bdevs_discovered": 1, 00:34:07.346 "num_base_bdevs_operational": 1, 00:34:07.346 "base_bdevs_list": [ 00:34:07.346 { 00:34:07.346 "name": null, 00:34:07.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.346 "is_configured": false, 00:34:07.346 "data_offset": 0, 00:34:07.346 "data_size": 63488 00:34:07.346 }, 00:34:07.346 { 00:34:07.346 "name": "BaseBdev2", 00:34:07.346 "uuid": 
"bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:07.346 "is_configured": true, 00:34:07.346 "data_offset": 2048, 00:34:07.346 "data_size": 63488 00:34:07.346 } 00:34:07.346 ] 00:34:07.346 }' 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:07.346 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.605 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:07.605 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:07.605 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:07.605 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:07.605 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:07.605 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.605 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.605 17:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.605 17:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.862 17:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.862 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:07.862 "name": "raid_bdev1", 00:34:07.862 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:07.862 "strip_size_kb": 0, 00:34:07.862 "state": "online", 00:34:07.862 "raid_level": "raid1", 00:34:07.862 "superblock": true, 00:34:07.862 "num_base_bdevs": 2, 00:34:07.862 "num_base_bdevs_discovered": 1, 00:34:07.862 "num_base_bdevs_operational": 1, 00:34:07.862 "base_bdevs_list": [ 00:34:07.862 { 
00:34:07.862 "name": null, 00:34:07.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.862 "is_configured": false, 00:34:07.862 "data_offset": 0, 00:34:07.862 "data_size": 63488 00:34:07.862 }, 00:34:07.862 { 00:34:07.862 "name": "BaseBdev2", 00:34:07.862 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:07.862 "is_configured": true, 00:34:07.863 "data_offset": 2048, 00:34:07.863 "data_size": 63488 00:34:07.863 } 00:34:07.863 ] 00:34:07.863 }' 00:34:07.863 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:07.863 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:07.863 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:07.863 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:07.863 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:07.863 17:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.863 17:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.863 [2024-11-26 17:30:45.184210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:07.863 [2024-11-26 17:30:45.200822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:34:07.863 17:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.863 17:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:07.863 [2024-11-26 17:30:45.203230] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:08.797 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:08.797 17:30:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:08.797 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:08.797 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:08.797 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:08.797 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:08.797 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.797 17:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.797 17:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.797 17:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:09.055 "name": "raid_bdev1", 00:34:09.055 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:09.055 "strip_size_kb": 0, 00:34:09.055 "state": "online", 00:34:09.055 "raid_level": "raid1", 00:34:09.055 "superblock": true, 00:34:09.055 "num_base_bdevs": 2, 00:34:09.055 "num_base_bdevs_discovered": 2, 00:34:09.055 "num_base_bdevs_operational": 2, 00:34:09.055 "process": { 00:34:09.055 "type": "rebuild", 00:34:09.055 "target": "spare", 00:34:09.055 "progress": { 00:34:09.055 "blocks": 20480, 00:34:09.055 "percent": 32 00:34:09.055 } 00:34:09.055 }, 00:34:09.055 "base_bdevs_list": [ 00:34:09.055 { 00:34:09.055 "name": "spare", 00:34:09.055 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:09.055 "is_configured": true, 00:34:09.055 "data_offset": 2048, 00:34:09.055 "data_size": 63488 00:34:09.055 }, 00:34:09.055 { 00:34:09.055 "name": "BaseBdev2", 00:34:09.055 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:09.055 
"is_configured": true, 00:34:09.055 "data_offset": 2048, 00:34:09.055 "data_size": 63488 00:34:09.055 } 00:34:09.055 ] 00:34:09.055 }' 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:34:09.055 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.055 17:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:09.056 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.056 17:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.056 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:09.056 "name": "raid_bdev1", 00:34:09.056 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:09.056 "strip_size_kb": 0, 00:34:09.056 "state": "online", 00:34:09.056 "raid_level": "raid1", 00:34:09.056 "superblock": true, 00:34:09.056 "num_base_bdevs": 2, 00:34:09.056 "num_base_bdevs_discovered": 2, 00:34:09.056 "num_base_bdevs_operational": 2, 00:34:09.056 "process": { 00:34:09.056 "type": "rebuild", 00:34:09.056 "target": "spare", 00:34:09.056 "progress": { 00:34:09.056 "blocks": 22528, 00:34:09.056 "percent": 35 00:34:09.056 } 00:34:09.056 }, 00:34:09.056 "base_bdevs_list": [ 00:34:09.056 { 00:34:09.056 "name": "spare", 00:34:09.056 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:09.056 "is_configured": true, 00:34:09.056 "data_offset": 2048, 00:34:09.056 "data_size": 63488 00:34:09.056 }, 00:34:09.056 { 00:34:09.056 "name": "BaseBdev2", 00:34:09.056 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:09.056 "is_configured": true, 00:34:09.056 "data_offset": 2048, 00:34:09.056 "data_size": 63488 00:34:09.056 } 00:34:09.056 ] 00:34:09.056 }' 00:34:09.056 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:09.056 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:09.056 17:30:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:09.056 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:09.056 17:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:10.429 "name": "raid_bdev1", 00:34:10.429 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:10.429 "strip_size_kb": 0, 00:34:10.429 "state": "online", 00:34:10.429 "raid_level": "raid1", 00:34:10.429 "superblock": true, 00:34:10.429 "num_base_bdevs": 2, 00:34:10.429 "num_base_bdevs_discovered": 2, 00:34:10.429 "num_base_bdevs_operational": 2, 00:34:10.429 "process": { 
00:34:10.429 "type": "rebuild", 00:34:10.429 "target": "spare", 00:34:10.429 "progress": { 00:34:10.429 "blocks": 45056, 00:34:10.429 "percent": 70 00:34:10.429 } 00:34:10.429 }, 00:34:10.429 "base_bdevs_list": [ 00:34:10.429 { 00:34:10.429 "name": "spare", 00:34:10.429 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:10.429 "is_configured": true, 00:34:10.429 "data_offset": 2048, 00:34:10.429 "data_size": 63488 00:34:10.429 }, 00:34:10.429 { 00:34:10.429 "name": "BaseBdev2", 00:34:10.429 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:10.429 "is_configured": true, 00:34:10.429 "data_offset": 2048, 00:34:10.429 "data_size": 63488 00:34:10.429 } 00:34:10.429 ] 00:34:10.429 }' 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:10.429 17:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:10.996 [2024-11-26 17:30:48.322802] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:10.996 [2024-11-26 17:30:48.323107] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:10.996 [2024-11-26 17:30:48.323240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:11.254 
17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:11.254 "name": "raid_bdev1", 00:34:11.254 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:11.254 "strip_size_kb": 0, 00:34:11.254 "state": "online", 00:34:11.254 "raid_level": "raid1", 00:34:11.254 "superblock": true, 00:34:11.254 "num_base_bdevs": 2, 00:34:11.254 "num_base_bdevs_discovered": 2, 00:34:11.254 "num_base_bdevs_operational": 2, 00:34:11.254 "base_bdevs_list": [ 00:34:11.254 { 00:34:11.254 "name": "spare", 00:34:11.254 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:11.254 "is_configured": true, 00:34:11.254 "data_offset": 2048, 00:34:11.254 "data_size": 63488 00:34:11.254 }, 00:34:11.254 { 00:34:11.254 "name": "BaseBdev2", 00:34:11.254 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:11.254 "is_configured": true, 00:34:11.254 "data_offset": 2048, 00:34:11.254 "data_size": 63488 00:34:11.254 } 00:34:11.254 ] 00:34:11.254 }' 00:34:11.254 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:11.588 "name": "raid_bdev1", 00:34:11.588 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:11.588 "strip_size_kb": 0, 00:34:11.588 "state": "online", 00:34:11.588 "raid_level": "raid1", 00:34:11.588 "superblock": true, 00:34:11.588 "num_base_bdevs": 2, 00:34:11.588 "num_base_bdevs_discovered": 2, 00:34:11.588 "num_base_bdevs_operational": 2, 00:34:11.588 "base_bdevs_list": [ 00:34:11.588 { 00:34:11.588 
"name": "spare", 00:34:11.588 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:11.588 "is_configured": true, 00:34:11.588 "data_offset": 2048, 00:34:11.588 "data_size": 63488 00:34:11.588 }, 00:34:11.588 { 00:34:11.588 "name": "BaseBdev2", 00:34:11.588 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:11.588 "is_configured": true, 00:34:11.588 "data_offset": 2048, 00:34:11.588 "data_size": 63488 00:34:11.588 } 00:34:11.588 ] 00:34:11.588 }' 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:11.588 "name": "raid_bdev1", 00:34:11.588 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:11.588 "strip_size_kb": 0, 00:34:11.588 "state": "online", 00:34:11.588 "raid_level": "raid1", 00:34:11.588 "superblock": true, 00:34:11.588 "num_base_bdevs": 2, 00:34:11.588 "num_base_bdevs_discovered": 2, 00:34:11.588 "num_base_bdevs_operational": 2, 00:34:11.588 "base_bdevs_list": [ 00:34:11.588 { 00:34:11.588 "name": "spare", 00:34:11.588 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:11.588 "is_configured": true, 00:34:11.588 "data_offset": 2048, 00:34:11.588 "data_size": 63488 00:34:11.588 }, 00:34:11.588 { 00:34:11.588 "name": "BaseBdev2", 00:34:11.588 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:11.588 "is_configured": true, 00:34:11.588 "data_offset": 2048, 00:34:11.588 "data_size": 63488 00:34:11.588 } 00:34:11.588 ] 00:34:11.588 }' 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:11.588 17:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:34:12.161 [2024-11-26 17:30:49.399679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:12.161 [2024-11-26 17:30:49.399711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:12.161 [2024-11-26 17:30:49.399792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:12.161 [2024-11-26 17:30:49.399860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:12.161 [2024-11-26 17:30:49.399875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:12.161 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:12.419 /dev/nbd0 00:34:12.419 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:12.419 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:12.419 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:12.420 1+0 records in 00:34:12.420 1+0 records out 00:34:12.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041203 s, 9.9 MB/s 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:12.420 17:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:34:12.678 /dev/nbd1 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:12.678 17:30:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:12.678 1+0 records in 00:34:12.678 1+0 records out 00:34:12.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393762 s, 10.4 MB/s 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:12.678 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:12.936 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:34:12.936 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:12.936 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:12.936 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:12.936 
17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:12.936 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:12.936 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:13.194 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:13.453 [2024-11-26 17:30:50.787499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:13.453 [2024-11-26 17:30:50.787556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:13.453 [2024-11-26 17:30:50.787584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:13.453 [2024-11-26 17:30:50.787597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:13.453 [2024-11-26 17:30:50.790028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:13.453 [2024-11-26 17:30:50.790204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:13.453 [2024-11-26 17:30:50.790320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:13.453 [2024-11-26 
17:30:50.790381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:13.453 [2024-11-26 17:30:50.790529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:13.453 spare 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.453 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:13.454 [2024-11-26 17:30:50.890621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:13.454 [2024-11-26 17:30:50.890667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:13.454 [2024-11-26 17:30:50.891033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:34:13.454 [2024-11-26 17:30:50.891275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:13.454 [2024-11-26 17:30:50.891292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:34:13.454 [2024-11-26 17:30:50.891536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:13.454 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:13.712 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.712 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.713 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:13.713 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.713 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.713 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:13.713 "name": "raid_bdev1", 00:34:13.713 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:13.713 "strip_size_kb": 0, 00:34:13.713 "state": "online", 00:34:13.713 "raid_level": "raid1", 00:34:13.713 "superblock": true, 00:34:13.713 "num_base_bdevs": 2, 00:34:13.713 "num_base_bdevs_discovered": 2, 00:34:13.713 "num_base_bdevs_operational": 2, 00:34:13.713 "base_bdevs_list": [ 00:34:13.713 { 00:34:13.713 "name": "spare", 00:34:13.713 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:13.713 "is_configured": true, 00:34:13.713 "data_offset": 2048, 00:34:13.713 "data_size": 63488 00:34:13.713 }, 00:34:13.713 { 00:34:13.713 "name": "BaseBdev2", 00:34:13.713 "uuid": 
"bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:13.713 "is_configured": true, 00:34:13.713 "data_offset": 2048, 00:34:13.713 "data_size": 63488 00:34:13.713 } 00:34:13.713 ] 00:34:13.713 }' 00:34:13.713 17:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:13.713 17:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:13.971 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:14.228 "name": "raid_bdev1", 00:34:14.228 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:14.228 "strip_size_kb": 0, 00:34:14.228 "state": "online", 00:34:14.228 "raid_level": "raid1", 00:34:14.228 "superblock": true, 00:34:14.228 "num_base_bdevs": 2, 00:34:14.228 "num_base_bdevs_discovered": 2, 00:34:14.228 "num_base_bdevs_operational": 2, 00:34:14.228 "base_bdevs_list": [ 00:34:14.228 { 
00:34:14.228 "name": "spare", 00:34:14.228 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:14.228 "is_configured": true, 00:34:14.228 "data_offset": 2048, 00:34:14.228 "data_size": 63488 00:34:14.228 }, 00:34:14.228 { 00:34:14.228 "name": "BaseBdev2", 00:34:14.228 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:14.228 "is_configured": true, 00:34:14.228 "data_offset": 2048, 00:34:14.228 "data_size": 63488 00:34:14.228 } 00:34:14.228 ] 00:34:14.228 }' 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:14.228 [2024-11-26 17:30:51.543764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:14.228 "name": "raid_bdev1", 00:34:14.228 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:14.228 "strip_size_kb": 0, 00:34:14.228 
"state": "online", 00:34:14.228 "raid_level": "raid1", 00:34:14.228 "superblock": true, 00:34:14.228 "num_base_bdevs": 2, 00:34:14.228 "num_base_bdevs_discovered": 1, 00:34:14.228 "num_base_bdevs_operational": 1, 00:34:14.228 "base_bdevs_list": [ 00:34:14.228 { 00:34:14.228 "name": null, 00:34:14.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.228 "is_configured": false, 00:34:14.228 "data_offset": 0, 00:34:14.228 "data_size": 63488 00:34:14.228 }, 00:34:14.228 { 00:34:14.228 "name": "BaseBdev2", 00:34:14.228 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:14.228 "is_configured": true, 00:34:14.228 "data_offset": 2048, 00:34:14.228 "data_size": 63488 00:34:14.228 } 00:34:14.228 ] 00:34:14.228 }' 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:14.228 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:14.794 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:14.794 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.794 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:14.794 [2024-11-26 17:30:51.943876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:14.794 [2024-11-26 17:30:51.944084] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:14.794 [2024-11-26 17:30:51.944104] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:14.794 [2024-11-26 17:30:51.944146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:14.794 [2024-11-26 17:30:51.960637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:34:14.794 17:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.794 17:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:34:14.794 [2024-11-26 17:30:51.962776] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:15.727 17:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.727 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:15.727 "name": "raid_bdev1", 00:34:15.727 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:15.727 "strip_size_kb": 0, 00:34:15.727 "state": "online", 00:34:15.727 "raid_level": "raid1", 
00:34:15.727 "superblock": true, 00:34:15.727 "num_base_bdevs": 2, 00:34:15.727 "num_base_bdevs_discovered": 2, 00:34:15.727 "num_base_bdevs_operational": 2, 00:34:15.727 "process": { 00:34:15.727 "type": "rebuild", 00:34:15.727 "target": "spare", 00:34:15.727 "progress": { 00:34:15.727 "blocks": 20480, 00:34:15.727 "percent": 32 00:34:15.727 } 00:34:15.727 }, 00:34:15.727 "base_bdevs_list": [ 00:34:15.727 { 00:34:15.727 "name": "spare", 00:34:15.727 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:15.727 "is_configured": true, 00:34:15.727 "data_offset": 2048, 00:34:15.727 "data_size": 63488 00:34:15.727 }, 00:34:15.727 { 00:34:15.727 "name": "BaseBdev2", 00:34:15.727 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:15.727 "is_configured": true, 00:34:15.727 "data_offset": 2048, 00:34:15.727 "data_size": 63488 00:34:15.727 } 00:34:15.727 ] 00:34:15.727 }' 00:34:15.727 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:15.727 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:15.727 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:15.727 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:15.727 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:34:15.727 17:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.727 17:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:15.727 [2024-11-26 17:30:53.112347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:15.727 [2024-11-26 17:30:53.170653] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:15.727 [2024-11-26 17:30:53.170725] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:34:15.727 [2024-11-26 17:30:53.170742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:15.727 [2024-11-26 17:30:53.170754] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:15.985 "name": "raid_bdev1", 00:34:15.985 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:15.985 "strip_size_kb": 0, 00:34:15.985 "state": "online", 00:34:15.985 "raid_level": "raid1", 00:34:15.985 "superblock": true, 00:34:15.985 "num_base_bdevs": 2, 00:34:15.985 "num_base_bdevs_discovered": 1, 00:34:15.985 "num_base_bdevs_operational": 1, 00:34:15.985 "base_bdevs_list": [ 00:34:15.985 { 00:34:15.985 "name": null, 00:34:15.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:15.985 "is_configured": false, 00:34:15.985 "data_offset": 0, 00:34:15.985 "data_size": 63488 00:34:15.985 }, 00:34:15.985 { 00:34:15.985 "name": "BaseBdev2", 00:34:15.985 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:15.985 "is_configured": true, 00:34:15.985 "data_offset": 2048, 00:34:15.985 "data_size": 63488 00:34:15.985 } 00:34:15.985 ] 00:34:15.985 }' 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:15.985 17:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:16.242 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:16.242 17:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.242 17:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:16.242 [2024-11-26 17:30:53.659317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:16.242 [2024-11-26 17:30:53.659532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:16.242 [2024-11-26 17:30:53.659568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:16.242 [2024-11-26 17:30:53.659584] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:16.242 [2024-11-26 17:30:53.660112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:16.242 [2024-11-26 17:30:53.660158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:16.242 [2024-11-26 17:30:53.660266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:16.242 [2024-11-26 17:30:53.660286] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:16.242 [2024-11-26 17:30:53.660301] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:16.242 [2024-11-26 17:30:53.660332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:16.242 [2024-11-26 17:30:53.677695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:34:16.242 spare 00:34:16.242 17:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.242 17:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:34:16.242 [2024-11-26 17:30:53.680137] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:17.617 "name": "raid_bdev1", 00:34:17.617 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:17.617 "strip_size_kb": 0, 00:34:17.617 "state": "online", 00:34:17.617 "raid_level": "raid1", 00:34:17.617 "superblock": true, 00:34:17.617 "num_base_bdevs": 2, 00:34:17.617 "num_base_bdevs_discovered": 2, 00:34:17.617 "num_base_bdevs_operational": 2, 00:34:17.617 "process": { 00:34:17.617 "type": "rebuild", 00:34:17.617 "target": "spare", 00:34:17.617 "progress": { 00:34:17.617 "blocks": 20480, 00:34:17.617 "percent": 32 00:34:17.617 } 00:34:17.617 }, 00:34:17.617 "base_bdevs_list": [ 00:34:17.617 { 00:34:17.617 "name": "spare", 00:34:17.617 "uuid": "4aa6a3de-c1d5-5588-9cac-52572529ba8c", 00:34:17.617 "is_configured": true, 00:34:17.617 "data_offset": 2048, 00:34:17.617 "data_size": 63488 00:34:17.617 }, 00:34:17.617 { 00:34:17.617 "name": "BaseBdev2", 00:34:17.617 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:17.617 "is_configured": true, 00:34:17.617 "data_offset": 2048, 00:34:17.617 "data_size": 63488 00:34:17.617 } 00:34:17.617 ] 00:34:17.617 }' 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:17.617 
17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:17.617 [2024-11-26 17:30:54.832975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:17.617 [2024-11-26 17:30:54.887811] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:17.617 [2024-11-26 17:30:54.887879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:17.617 [2024-11-26 17:30:54.887898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:17.617 [2024-11-26 17:30:54.887906] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:17.617 "name": "raid_bdev1", 00:34:17.617 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:17.617 "strip_size_kb": 0, 00:34:17.617 "state": "online", 00:34:17.617 "raid_level": "raid1", 00:34:17.617 "superblock": true, 00:34:17.617 "num_base_bdevs": 2, 00:34:17.617 "num_base_bdevs_discovered": 1, 00:34:17.617 "num_base_bdevs_operational": 1, 00:34:17.617 "base_bdevs_list": [ 00:34:17.617 { 00:34:17.617 "name": null, 00:34:17.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.617 "is_configured": false, 00:34:17.617 "data_offset": 0, 00:34:17.617 "data_size": 63488 00:34:17.617 }, 00:34:17.617 { 00:34:17.617 "name": "BaseBdev2", 00:34:17.617 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:17.617 "is_configured": true, 00:34:17.617 "data_offset": 2048, 00:34:17.617 "data_size": 63488 00:34:17.617 } 00:34:17.617 ] 00:34:17.617 }' 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:17.617 17:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:18.256 17:30:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:18.256 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:18.256 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:18.256 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:18.256 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:18.256 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:18.256 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:18.256 17:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.256 17:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:18.256 17:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:18.257 "name": "raid_bdev1", 00:34:18.257 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:18.257 "strip_size_kb": 0, 00:34:18.257 "state": "online", 00:34:18.257 "raid_level": "raid1", 00:34:18.257 "superblock": true, 00:34:18.257 "num_base_bdevs": 2, 00:34:18.257 "num_base_bdevs_discovered": 1, 00:34:18.257 "num_base_bdevs_operational": 1, 00:34:18.257 "base_bdevs_list": [ 00:34:18.257 { 00:34:18.257 "name": null, 00:34:18.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.257 "is_configured": false, 00:34:18.257 "data_offset": 0, 00:34:18.257 "data_size": 63488 00:34:18.257 }, 00:34:18.257 { 00:34:18.257 "name": "BaseBdev2", 00:34:18.257 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:18.257 "is_configured": true, 00:34:18.257 "data_offset": 2048, 00:34:18.257 "data_size": 
63488 00:34:18.257 } 00:34:18.257 ] 00:34:18.257 }' 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:18.257 [2024-11-26 17:30:55.519520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:18.257 [2024-11-26 17:30:55.519578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:18.257 [2024-11-26 17:30:55.519610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:18.257 [2024-11-26 17:30:55.519632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:18.257 [2024-11-26 17:30:55.520094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:18.257 [2024-11-26 17:30:55.520115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:34:18.257 [2024-11-26 17:30:55.520196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:18.257 [2024-11-26 17:30:55.520212] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:18.257 [2024-11-26 17:30:55.520228] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:18.257 [2024-11-26 17:30:55.520239] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:34:18.257 BaseBdev1 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.257 17:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:19.191 "name": "raid_bdev1", 00:34:19.191 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:19.191 "strip_size_kb": 0, 00:34:19.191 "state": "online", 00:34:19.191 "raid_level": "raid1", 00:34:19.191 "superblock": true, 00:34:19.191 "num_base_bdevs": 2, 00:34:19.191 "num_base_bdevs_discovered": 1, 00:34:19.191 "num_base_bdevs_operational": 1, 00:34:19.191 "base_bdevs_list": [ 00:34:19.191 { 00:34:19.191 "name": null, 00:34:19.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.191 "is_configured": false, 00:34:19.191 "data_offset": 0, 00:34:19.191 "data_size": 63488 00:34:19.191 }, 00:34:19.191 { 00:34:19.191 "name": "BaseBdev2", 00:34:19.191 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:19.191 "is_configured": true, 00:34:19.191 "data_offset": 2048, 00:34:19.191 "data_size": 63488 00:34:19.191 } 00:34:19.191 ] 00:34:19.191 }' 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:19.191 17:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:19.757 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:19.757 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:19.757 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:34:19.757 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:19.758 17:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:19.758 "name": "raid_bdev1", 00:34:19.758 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:19.758 "strip_size_kb": 0, 00:34:19.758 "state": "online", 00:34:19.758 "raid_level": "raid1", 00:34:19.758 "superblock": true, 00:34:19.758 "num_base_bdevs": 2, 00:34:19.758 "num_base_bdevs_discovered": 1, 00:34:19.758 "num_base_bdevs_operational": 1, 00:34:19.758 "base_bdevs_list": [ 00:34:19.758 { 00:34:19.758 "name": null, 00:34:19.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.758 "is_configured": false, 00:34:19.758 "data_offset": 0, 00:34:19.758 "data_size": 63488 00:34:19.758 }, 00:34:19.758 { 00:34:19.758 "name": "BaseBdev2", 00:34:19.758 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:19.758 "is_configured": true, 00:34:19.758 "data_offset": 2048, 00:34:19.758 "data_size": 63488 00:34:19.758 } 00:34:19.758 ] 00:34:19.758 }' 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:19.758 17:30:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:19.758 [2024-11-26 17:30:57.147911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:19.758 [2024-11-26 17:30:57.148093] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:19.758 [2024-11-26 17:30:57.148115] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:19.758 request: 00:34:19.758 { 00:34:19.758 "base_bdev": "BaseBdev1", 00:34:19.758 "raid_bdev": "raid_bdev1", 00:34:19.758 "method": 
"bdev_raid_add_base_bdev", 00:34:19.758 "req_id": 1 00:34:19.758 } 00:34:19.758 Got JSON-RPC error response 00:34:19.758 response: 00:34:19.758 { 00:34:19.758 "code": -22, 00:34:19.758 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:19.758 } 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:19.758 17:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:21.136 17:30:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.136 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:21.136 "name": "raid_bdev1", 00:34:21.136 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:21.136 "strip_size_kb": 0, 00:34:21.136 "state": "online", 00:34:21.136 "raid_level": "raid1", 00:34:21.136 "superblock": true, 00:34:21.136 "num_base_bdevs": 2, 00:34:21.136 "num_base_bdevs_discovered": 1, 00:34:21.136 "num_base_bdevs_operational": 1, 00:34:21.136 "base_bdevs_list": [ 00:34:21.136 { 00:34:21.136 "name": null, 00:34:21.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.137 "is_configured": false, 00:34:21.137 "data_offset": 0, 00:34:21.137 "data_size": 63488 00:34:21.137 }, 00:34:21.137 { 00:34:21.137 "name": "BaseBdev2", 00:34:21.137 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:21.137 "is_configured": true, 00:34:21.137 "data_offset": 2048, 00:34:21.137 "data_size": 63488 00:34:21.137 } 00:34:21.137 ] 00:34:21.137 }' 00:34:21.137 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:21.137 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:21.396 "name": "raid_bdev1", 00:34:21.396 "uuid": "11983f74-e4cf-46a9-a3d1-184ffd8dc33c", 00:34:21.396 "strip_size_kb": 0, 00:34:21.396 "state": "online", 00:34:21.396 "raid_level": "raid1", 00:34:21.396 "superblock": true, 00:34:21.396 "num_base_bdevs": 2, 00:34:21.396 "num_base_bdevs_discovered": 1, 00:34:21.396 "num_base_bdevs_operational": 1, 00:34:21.396 "base_bdevs_list": [ 00:34:21.396 { 00:34:21.396 "name": null, 00:34:21.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.396 "is_configured": false, 00:34:21.396 "data_offset": 0, 00:34:21.396 "data_size": 63488 00:34:21.396 }, 00:34:21.396 { 00:34:21.396 "name": "BaseBdev2", 00:34:21.396 "uuid": "bdd1296c-9716-5a3c-8651-8ec8dd3800af", 00:34:21.396 "is_configured": true, 00:34:21.396 "data_offset": 2048, 00:34:21.396 "data_size": 63488 00:34:21.396 } 00:34:21.396 ] 00:34:21.396 }' 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76174 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76174 ']' 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76174 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76174 00:34:21.396 killing process with pid 76174 00:34:21.396 Received shutdown signal, test time was about 60.000000 seconds 00:34:21.396 00:34:21.396 Latency(us) 00:34:21.396 [2024-11-26T17:30:58.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:21.396 [2024-11-26T17:30:58.843Z] =================================================================================================================== 00:34:21.396 [2024-11-26T17:30:58.843Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76174' 00:34:21.396 17:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76174 00:34:21.396 [2024-11-26 17:30:58.788438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:21.396 17:30:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76174 00:34:21.396 [2024-11-26 17:30:58.788560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:21.396 [2024-11-26 17:30:58.788610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:21.396 [2024-11-26 17:30:58.788624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:34:21.981 [2024-11-26 17:30:59.103700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:34:22.919 00:34:22.919 real 0m24.576s 00:34:22.919 user 0m29.612s 00:34:22.919 sys 0m4.263s 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.919 ************************************ 00:34:22.919 END TEST raid_rebuild_test_sb 00:34:22.919 ************************************ 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:22.919 17:31:00 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:34:22.919 17:31:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:22.919 17:31:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.919 17:31:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:22.919 ************************************ 00:34:22.919 START TEST raid_rebuild_test_io 00:34:22.919 ************************************ 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:22.919 
17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76915 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76915 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76915 ']' 00:34:22.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.919 17:31:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:23.177 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:23.177 Zero copy mechanism will not be used. 00:34:23.177 [2024-11-26 17:31:00.477503] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:34:23.177 [2024-11-26 17:31:00.477684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76915 ] 00:34:23.433 [2024-11-26 17:31:00.669179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.433 [2024-11-26 17:31:00.799633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.692 [2024-11-26 17:31:01.043630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:23.692 [2024-11-26 17:31:01.043927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:23.950 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.950 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:34:23.950 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:23.950 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:23.950 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.950 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.209 BaseBdev1_malloc 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.209 [2024-11-26 17:31:01.435137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:34:24.209 [2024-11-26 17:31:01.435343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:24.209 [2024-11-26 17:31:01.435381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:24.209 [2024-11-26 17:31:01.435399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:24.209 [2024-11-26 17:31:01.438164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:24.209 [2024-11-26 17:31:01.438211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:24.209 BaseBdev1 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.209 BaseBdev2_malloc 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.209 [2024-11-26 17:31:01.495458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:24.209 [2024-11-26 17:31:01.495703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:24.209 [2024-11-26 17:31:01.495747] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:24.209 [2024-11-26 17:31:01.495767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:24.209 [2024-11-26 17:31:01.499433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:24.209 [2024-11-26 17:31:01.499493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:24.209 BaseBdev2 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.209 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.209 spare_malloc 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.210 spare_delay 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.210 [2024-11-26 17:31:01.578192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:34:24.210 [2024-11-26 17:31:01.578383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:24.210 [2024-11-26 17:31:01.578417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:24.210 [2024-11-26 17:31:01.578434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:24.210 [2024-11-26 17:31:01.581082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:24.210 [2024-11-26 17:31:01.581120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:24.210 spare 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.210 [2024-11-26 17:31:01.586231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:24.210 [2024-11-26 17:31:01.588585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:24.210 [2024-11-26 17:31:01.588695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:24.210 [2024-11-26 17:31:01.588714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:34:24.210 [2024-11-26 17:31:01.589010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:24.210 [2024-11-26 17:31:01.589221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:24.210 [2024-11-26 17:31:01.589237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:34:24.210 [2024-11-26 17:31:01.589418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:24.210 
"name": "raid_bdev1", 00:34:24.210 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:24.210 "strip_size_kb": 0, 00:34:24.210 "state": "online", 00:34:24.210 "raid_level": "raid1", 00:34:24.210 "superblock": false, 00:34:24.210 "num_base_bdevs": 2, 00:34:24.210 "num_base_bdevs_discovered": 2, 00:34:24.210 "num_base_bdevs_operational": 2, 00:34:24.210 "base_bdevs_list": [ 00:34:24.210 { 00:34:24.210 "name": "BaseBdev1", 00:34:24.210 "uuid": "65202e8a-94c3-50d2-890e-0e49d9e23f05", 00:34:24.210 "is_configured": true, 00:34:24.210 "data_offset": 0, 00:34:24.210 "data_size": 65536 00:34:24.210 }, 00:34:24.210 { 00:34:24.210 "name": "BaseBdev2", 00:34:24.210 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:24.210 "is_configured": true, 00:34:24.210 "data_offset": 0, 00:34:24.210 "data_size": 65536 00:34:24.210 } 00:34:24.210 ] 00:34:24.210 }' 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:24.210 17:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.777 [2024-11-26 17:31:02.038784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.777 [2024-11-26 17:31:02.130451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:24.777 17:31:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:24.777 "name": "raid_bdev1", 00:34:24.777 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:24.777 "strip_size_kb": 0, 00:34:24.777 "state": "online", 00:34:24.777 "raid_level": "raid1", 00:34:24.777 "superblock": false, 00:34:24.777 "num_base_bdevs": 2, 00:34:24.777 "num_base_bdevs_discovered": 1, 00:34:24.777 "num_base_bdevs_operational": 1, 00:34:24.777 "base_bdevs_list": [ 00:34:24.777 { 00:34:24.777 "name": null, 00:34:24.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:24.777 "is_configured": false, 00:34:24.777 "data_offset": 0, 00:34:24.777 "data_size": 65536 00:34:24.777 }, 00:34:24.777 { 00:34:24.777 "name": "BaseBdev2", 00:34:24.777 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:24.777 "is_configured": true, 00:34:24.777 "data_offset": 0, 00:34:24.777 "data_size": 65536 00:34:24.777 } 00:34:24.777 ] 00:34:24.777 }' 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:34:24.777 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:25.034 [2024-11-26 17:31:02.268031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:25.034 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:25.034 Zero copy mechanism will not be used. 00:34:25.034 Running I/O for 60 seconds... 00:34:25.302 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:25.302 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.302 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:25.302 [2024-11-26 17:31:02.581445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:25.302 17:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.302 17:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:25.302 [2024-11-26 17:31:02.661474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:34:25.302 [2024-11-26 17:31:02.663995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:25.583 [2024-11-26 17:31:02.779578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:25.583 [2024-11-26 17:31:02.780370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:25.583 [2024-11-26 17:31:03.022946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:26.101 209.00 IOPS, 627.00 MiB/s [2024-11-26T17:31:03.548Z] [2024-11-26 17:31:03.373598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:34:26.101 [2024-11-26 17:31:03.490875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:26.101 [2024-11-26 17:31:03.491232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:26.360 "name": "raid_bdev1", 00:34:26.360 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:26.360 "strip_size_kb": 0, 00:34:26.360 "state": "online", 00:34:26.360 "raid_level": "raid1", 00:34:26.360 "superblock": false, 00:34:26.360 "num_base_bdevs": 2, 00:34:26.360 "num_base_bdevs_discovered": 2, 00:34:26.360 "num_base_bdevs_operational": 2, 00:34:26.360 "process": { 00:34:26.360 "type": "rebuild", 00:34:26.360 "target": 
"spare", 00:34:26.360 "progress": { 00:34:26.360 "blocks": 10240, 00:34:26.360 "percent": 15 00:34:26.360 } 00:34:26.360 }, 00:34:26.360 "base_bdevs_list": [ 00:34:26.360 { 00:34:26.360 "name": "spare", 00:34:26.360 "uuid": "d471ff88-eaf1-5b0b-957c-e3636a979d92", 00:34:26.360 "is_configured": true, 00:34:26.360 "data_offset": 0, 00:34:26.360 "data_size": 65536 00:34:26.360 }, 00:34:26.360 { 00:34:26.360 "name": "BaseBdev2", 00:34:26.360 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:26.360 "is_configured": true, 00:34:26.360 "data_offset": 0, 00:34:26.360 "data_size": 65536 00:34:26.360 } 00:34:26.360 ] 00:34:26.360 }' 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.360 17:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:26.360 [2024-11-26 17:31:03.782673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:26.620 [2024-11-26 17:31:03.827702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:26.620 [2024-11-26 17:31:03.828390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:26.620 [2024-11-26 17:31:03.936573] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:34:26.620 [2024-11-26 17:31:03.953455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:26.620 [2024-11-26 17:31:03.953509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:26.620 [2024-11-26 17:31:03.953530] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:26.620 [2024-11-26 17:31:04.003501] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:26.620 17:31:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:26.620 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.880 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:26.880 "name": "raid_bdev1", 00:34:26.880 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:26.880 "strip_size_kb": 0, 00:34:26.880 "state": "online", 00:34:26.880 "raid_level": "raid1", 00:34:26.880 "superblock": false, 00:34:26.880 "num_base_bdevs": 2, 00:34:26.880 "num_base_bdevs_discovered": 1, 00:34:26.880 "num_base_bdevs_operational": 1, 00:34:26.880 "base_bdevs_list": [ 00:34:26.880 { 00:34:26.880 "name": null, 00:34:26.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.880 "is_configured": false, 00:34:26.880 "data_offset": 0, 00:34:26.880 "data_size": 65536 00:34:26.880 }, 00:34:26.880 { 00:34:26.880 "name": "BaseBdev2", 00:34:26.880 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:26.880 "is_configured": true, 00:34:26.880 "data_offset": 0, 00:34:26.880 "data_size": 65536 00:34:26.880 } 00:34:26.880 ] 00:34:26.880 }' 00:34:26.880 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:26.880 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:27.139 160.50 IOPS, 481.50 MiB/s [2024-11-26T17:31:04.586Z] 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:27.139 17:31:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:27.139 "name": "raid_bdev1", 00:34:27.139 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:27.139 "strip_size_kb": 0, 00:34:27.139 "state": "online", 00:34:27.139 "raid_level": "raid1", 00:34:27.139 "superblock": false, 00:34:27.139 "num_base_bdevs": 2, 00:34:27.139 "num_base_bdevs_discovered": 1, 00:34:27.139 "num_base_bdevs_operational": 1, 00:34:27.139 "base_bdevs_list": [ 00:34:27.139 { 00:34:27.139 "name": null, 00:34:27.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.139 "is_configured": false, 00:34:27.139 "data_offset": 0, 00:34:27.139 "data_size": 65536 00:34:27.139 }, 00:34:27.139 { 00:34:27.139 "name": "BaseBdev2", 00:34:27.139 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:27.139 "is_configured": true, 00:34:27.139 "data_offset": 0, 00:34:27.139 "data_size": 65536 00:34:27.139 } 00:34:27.139 ] 00:34:27.139 }' 00:34:27.139 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:27.398 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:27.398 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:27.398 17:31:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:27.398 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:27.398 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.398 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:27.398 [2024-11-26 17:31:04.661710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:27.398 17:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.398 17:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:27.398 [2024-11-26 17:31:04.739688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:27.398 [2024-11-26 17:31:04.741843] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:27.656 [2024-11-26 17:31:04.862268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:27.656 [2024-11-26 17:31:04.862848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:27.914 [2024-11-26 17:31:05.110657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:28.173 173.33 IOPS, 520.00 MiB/s [2024-11-26T17:31:05.620Z] [2024-11-26 17:31:05.448094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:34:28.432 [2024-11-26 17:31:05.655697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:28.432 [2024-11-26 17:31:05.656050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:28.432 
17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:28.432 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:28.432 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:28.432 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:28.432 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:28.432 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:28.432 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.432 17:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:28.433 "name": "raid_bdev1", 00:34:28.433 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:28.433 "strip_size_kb": 0, 00:34:28.433 "state": "online", 00:34:28.433 "raid_level": "raid1", 00:34:28.433 "superblock": false, 00:34:28.433 "num_base_bdevs": 2, 00:34:28.433 "num_base_bdevs_discovered": 2, 00:34:28.433 "num_base_bdevs_operational": 2, 00:34:28.433 "process": { 00:34:28.433 "type": "rebuild", 00:34:28.433 "target": "spare", 00:34:28.433 "progress": { 00:34:28.433 "blocks": 10240, 00:34:28.433 "percent": 15 00:34:28.433 } 00:34:28.433 }, 00:34:28.433 "base_bdevs_list": [ 00:34:28.433 { 00:34:28.433 "name": "spare", 00:34:28.433 "uuid": "d471ff88-eaf1-5b0b-957c-e3636a979d92", 00:34:28.433 "is_configured": true, 00:34:28.433 "data_offset": 0, 00:34:28.433 "data_size": 
65536 00:34:28.433 }, 00:34:28.433 { 00:34:28.433 "name": "BaseBdev2", 00:34:28.433 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:28.433 "is_configured": true, 00:34:28.433 "data_offset": 0, 00:34:28.433 "data_size": 65536 00:34:28.433 } 00:34:28.433 ] 00:34:28.433 }' 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=419 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.433 17:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.691 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:28.691 "name": "raid_bdev1", 00:34:28.691 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:28.691 "strip_size_kb": 0, 00:34:28.691 "state": "online", 00:34:28.691 "raid_level": "raid1", 00:34:28.691 "superblock": false, 00:34:28.691 "num_base_bdevs": 2, 00:34:28.691 "num_base_bdevs_discovered": 2, 00:34:28.691 "num_base_bdevs_operational": 2, 00:34:28.691 "process": { 00:34:28.691 "type": "rebuild", 00:34:28.691 "target": "spare", 00:34:28.691 "progress": { 00:34:28.691 "blocks": 10240, 00:34:28.691 "percent": 15 00:34:28.691 } 00:34:28.691 }, 00:34:28.691 "base_bdevs_list": [ 00:34:28.691 { 00:34:28.691 "name": "spare", 00:34:28.691 "uuid": "d471ff88-eaf1-5b0b-957c-e3636a979d92", 00:34:28.691 "is_configured": true, 00:34:28.691 "data_offset": 0, 00:34:28.691 "data_size": 65536 00:34:28.691 }, 00:34:28.691 { 00:34:28.691 "name": "BaseBdev2", 00:34:28.691 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:28.691 "is_configured": true, 00:34:28.691 "data_offset": 0, 00:34:28.691 "data_size": 65536 00:34:28.691 } 00:34:28.691 ] 00:34:28.691 }' 00:34:28.691 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:28.691 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:28.691 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:34:28.691 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:28.691 17:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:28.950 [2024-11-26 17:31:06.159997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:34:29.209 145.50 IOPS, 436.50 MiB/s [2024-11-26T17:31:06.656Z] [2024-11-26 17:31:06.419858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:34:29.209 [2024-11-26 17:31:06.542796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:34:29.209 [2024-11-26 17:31:06.543080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:34:29.777 17:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:29.777 17:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:29.777 17:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:29.777 17:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:29.777 17:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:29.777 17:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:29.777 [2024-11-26 17:31:07.010478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:29.777 "name": "raid_bdev1", 00:34:29.777 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:29.777 "strip_size_kb": 0, 00:34:29.777 "state": "online", 00:34:29.777 "raid_level": "raid1", 00:34:29.777 "superblock": false, 00:34:29.777 "num_base_bdevs": 2, 00:34:29.777 "num_base_bdevs_discovered": 2, 00:34:29.777 "num_base_bdevs_operational": 2, 00:34:29.777 "process": { 00:34:29.777 "type": "rebuild", 00:34:29.777 "target": "spare", 00:34:29.777 "progress": { 00:34:29.777 "blocks": 26624, 00:34:29.777 "percent": 40 00:34:29.777 } 00:34:29.777 }, 00:34:29.777 "base_bdevs_list": [ 00:34:29.777 { 00:34:29.777 "name": "spare", 00:34:29.777 "uuid": "d471ff88-eaf1-5b0b-957c-e3636a979d92", 00:34:29.777 "is_configured": true, 00:34:29.777 "data_offset": 0, 00:34:29.777 "data_size": 65536 00:34:29.777 }, 00:34:29.777 { 00:34:29.777 "name": "BaseBdev2", 00:34:29.777 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:29.777 "is_configured": true, 00:34:29.777 "data_offset": 0, 00:34:29.777 "data_size": 65536 00:34:29.777 } 00:34:29.777 ] 00:34:29.777 }' 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:29.777 17:31:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:29.777 17:31:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:30.036 124.40 IOPS, 373.20 MiB/s [2024-11-26T17:31:07.483Z] [2024-11-26 17:31:07.453670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:34:30.036 [2024-11-26 17:31:07.453994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:34:30.603 [2024-11-26 17:31:07.922995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:30.862 [2024-11-26 17:31:08.140477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:30.862 "name": "raid_bdev1", 00:34:30.862 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:30.862 "strip_size_kb": 0, 00:34:30.862 "state": "online", 00:34:30.862 "raid_level": "raid1", 00:34:30.862 "superblock": false, 00:34:30.862 "num_base_bdevs": 2, 00:34:30.862 "num_base_bdevs_discovered": 2, 00:34:30.862 "num_base_bdevs_operational": 2, 00:34:30.862 "process": { 00:34:30.862 "type": "rebuild", 00:34:30.862 "target": "spare", 00:34:30.862 "progress": { 00:34:30.862 "blocks": 45056, 00:34:30.862 "percent": 68 00:34:30.862 } 00:34:30.862 }, 00:34:30.862 "base_bdevs_list": [ 00:34:30.862 { 00:34:30.862 "name": "spare", 00:34:30.862 "uuid": "d471ff88-eaf1-5b0b-957c-e3636a979d92", 00:34:30.862 "is_configured": true, 00:34:30.862 "data_offset": 0, 00:34:30.862 "data_size": 65536 00:34:30.862 }, 00:34:30.862 { 00:34:30.862 "name": "BaseBdev2", 00:34:30.862 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:30.862 "is_configured": true, 00:34:30.862 "data_offset": 0, 00:34:30.862 "data_size": 65536 00:34:30.862 } 00:34:30.862 ] 00:34:30.862 }' 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:30.862 110.83 IOPS, 332.50 MiB/s [2024-11-26T17:31:08.309Z] 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:30.862 17:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:31.121 [2024-11-26 17:31:08.490207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:34:31.687 [2024-11-26 17:31:08.842070] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:34:31.945 101.00 IOPS, 303.00 MiB/s [2024-11-26T17:31:09.392Z] 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:31.945 "name": "raid_bdev1", 00:34:31.945 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:31.945 "strip_size_kb": 0, 00:34:31.945 "state": "online", 00:34:31.945 "raid_level": "raid1", 00:34:31.945 "superblock": false, 00:34:31.945 "num_base_bdevs": 2, 00:34:31.945 "num_base_bdevs_discovered": 2, 00:34:31.945 "num_base_bdevs_operational": 2, 00:34:31.945 "process": { 00:34:31.945 "type": "rebuild", 00:34:31.945 "target": "spare", 00:34:31.945 "progress": { 00:34:31.945 "blocks": 61440, 00:34:31.945 
"percent": 93 00:34:31.945 } 00:34:31.945 }, 00:34:31.945 "base_bdevs_list": [ 00:34:31.945 { 00:34:31.945 "name": "spare", 00:34:31.945 "uuid": "d471ff88-eaf1-5b0b-957c-e3636a979d92", 00:34:31.945 "is_configured": true, 00:34:31.945 "data_offset": 0, 00:34:31.945 "data_size": 65536 00:34:31.945 }, 00:34:31.945 { 00:34:31.945 "name": "BaseBdev2", 00:34:31.945 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:31.945 "is_configured": true, 00:34:31.945 "data_offset": 0, 00:34:31.945 "data_size": 65536 00:34:31.945 } 00:34:31.945 ] 00:34:31.945 }' 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:31.945 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:32.203 [2024-11-26 17:31:09.411815] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:32.203 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:32.203 17:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:32.203 [2024-11-26 17:31:09.518359] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:32.203 [2024-11-26 17:31:09.520474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:33.208 93.88 IOPS, 281.62 MiB/s [2024-11-26T17:31:10.655Z] 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:33.208 "name": "raid_bdev1", 00:34:33.208 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:33.208 "strip_size_kb": 0, 00:34:33.208 "state": "online", 00:34:33.208 "raid_level": "raid1", 00:34:33.208 "superblock": false, 00:34:33.208 "num_base_bdevs": 2, 00:34:33.208 "num_base_bdevs_discovered": 2, 00:34:33.208 "num_base_bdevs_operational": 2, 00:34:33.208 "base_bdevs_list": [ 00:34:33.208 { 00:34:33.208 "name": "spare", 00:34:33.208 "uuid": "d471ff88-eaf1-5b0b-957c-e3636a979d92", 00:34:33.208 "is_configured": true, 00:34:33.208 "data_offset": 0, 00:34:33.208 "data_size": 65536 00:34:33.208 }, 00:34:33.208 { 00:34:33.208 "name": "BaseBdev2", 00:34:33.208 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:33.208 "is_configured": true, 00:34:33.208 "data_offset": 0, 00:34:33.208 "data_size": 65536 00:34:33.208 } 00:34:33.208 ] 00:34:33.208 }' 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:33.208 17:31:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:33.208 "name": "raid_bdev1", 00:34:33.208 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:33.208 "strip_size_kb": 0, 00:34:33.208 "state": "online", 00:34:33.208 "raid_level": "raid1", 00:34:33.208 "superblock": false, 00:34:33.208 "num_base_bdevs": 2, 00:34:33.208 "num_base_bdevs_discovered": 2, 00:34:33.208 "num_base_bdevs_operational": 2, 00:34:33.208 "base_bdevs_list": [ 00:34:33.208 { 00:34:33.208 "name": "spare", 00:34:33.208 "uuid": "d471ff88-eaf1-5b0b-957c-e3636a979d92", 
00:34:33.208 "is_configured": true, 00:34:33.208 "data_offset": 0, 00:34:33.208 "data_size": 65536 00:34:33.208 }, 00:34:33.208 { 00:34:33.208 "name": "BaseBdev2", 00:34:33.208 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:33.208 "is_configured": true, 00:34:33.208 "data_offset": 0, 00:34:33.208 "data_size": 65536 00:34:33.208 } 00:34:33.208 ] 00:34:33.208 }' 00:34:33.208 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:33.468 "name": "raid_bdev1", 00:34:33.468 "uuid": "10d4aa4f-c548-498c-aa1a-b676a60452fa", 00:34:33.468 "strip_size_kb": 0, 00:34:33.468 "state": "online", 00:34:33.468 "raid_level": "raid1", 00:34:33.468 "superblock": false, 00:34:33.468 "num_base_bdevs": 2, 00:34:33.468 "num_base_bdevs_discovered": 2, 00:34:33.468 "num_base_bdevs_operational": 2, 00:34:33.468 "base_bdevs_list": [ 00:34:33.468 { 00:34:33.468 "name": "spare", 00:34:33.468 "uuid": "d471ff88-eaf1-5b0b-957c-e3636a979d92", 00:34:33.468 "is_configured": true, 00:34:33.468 "data_offset": 0, 00:34:33.468 "data_size": 65536 00:34:33.468 }, 00:34:33.468 { 00:34:33.468 "name": "BaseBdev2", 00:34:33.468 "uuid": "eea0c21e-656c-5cb2-9c80-f0a9a7e20e86", 00:34:33.468 "is_configured": true, 00:34:33.468 "data_offset": 0, 00:34:33.468 "data_size": 65536 00:34:33.468 } 00:34:33.468 ] 00:34:33.468 }' 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:33.468 17:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:34.038 [2024-11-26 17:31:11.181562] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:34.038 [2024-11-26 17:31:11.181607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:34.038 00:34:34.038 Latency(us) 00:34:34.038 [2024-11-26T17:31:11.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:34.038 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:34:34.038 raid_bdev1 : 8.99 87.20 261.60 0.00 0.00 16465.02 312.08 114344.72 00:34:34.038 [2024-11-26T17:31:11.485Z] =================================================================================================================== 00:34:34.038 [2024-11-26T17:31:11.485Z] Total : 87.20 261.60 0.00 0.00 16465.02 312.08 114344.72 00:34:34.038 [2024-11-26 17:31:11.287984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:34.038 [2024-11-26 17:31:11.288081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:34.038 [2024-11-26 17:31:11.288169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:34.038 [2024-11-26 17:31:11.288191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:34.038 { 00:34:34.038 "results": [ 00:34:34.038 { 00:34:34.038 "job": "raid_bdev1", 00:34:34.038 "core_mask": "0x1", 00:34:34.038 "workload": "randrw", 00:34:34.038 "percentage": 50, 00:34:34.038 "status": "finished", 00:34:34.038 "queue_depth": 2, 00:34:34.038 "io_size": 3145728, 00:34:34.038 "runtime": 8.990663, 00:34:34.038 "iops": 87.20157790365404, 00:34:34.038 "mibps": 261.60473371096214, 00:34:34.038 "io_failed": 0, 00:34:34.038 "io_timeout": 0, 00:34:34.038 "avg_latency_us": 16465.021885325557, 00:34:34.038 "min_latency_us": 312.0761904761905, 00:34:34.038 "max_latency_us": 114344.71619047619 00:34:34.038 } 00:34:34.038 ], 00:34:34.038 
"core_count": 1 00:34:34.038 } 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:34.038 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:34:34.298 /dev/nbd0 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:34.298 1+0 records in 00:34:34.298 1+0 records out 00:34:34.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261116 s, 15.7 MB/s 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:34.298 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:34:34.558 /dev/nbd1 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:34.558 17:31:11 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:34.558 1+0 records in 00:34:34.558 1+0 records out 00:34:34.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278659 s, 14.7 MB/s 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:34.558 17:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:34:34.818 17:31:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:34:34.818 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:34.818 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:34:34.818 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:34.818 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:34:34.818 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:34.818 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:34:35.077 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76915 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76915 ']' 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76915 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.078 17:31:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76915 00:34:35.337 17:31:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:35.337 17:31:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:35.337 17:31:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76915' 00:34:35.337 killing process with pid 76915 00:34:35.337 17:31:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76915 00:34:35.337 Received shutdown signal, test time was about 10.267584 seconds 00:34:35.337 00:34:35.337 Latency(us) 00:34:35.337 [2024-11-26T17:31:12.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.337 [2024-11-26T17:31:12.784Z] =================================================================================================================== 00:34:35.337 [2024-11-26T17:31:12.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:35.337 [2024-11-26 17:31:12.538456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:35.337 17:31:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76915 00:34:35.596 [2024-11-26 17:31:12.833961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:34:36.974 00:34:36.974 real 0m13.685s 00:34:36.974 user 0m17.026s 00:34:36.974 sys 0m1.689s 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.974 ************************************ 00:34:36.974 END TEST raid_rebuild_test_io 00:34:36.974 ************************************ 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:36.974 17:31:14 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:34:36.974 17:31:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:36.974 
17:31:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.974 17:31:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:36.974 ************************************ 00:34:36.974 START TEST raid_rebuild_test_sb_io 00:34:36.974 ************************************ 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77315 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77315 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77315 ']' 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.974 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:36.974 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:36.974 Zero copy mechanism will not be used. 00:34:36.974 [2024-11-26 17:31:14.188316] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:34:36.974 [2024-11-26 17:31:14.188458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77315 ] 00:34:36.974 [2024-11-26 17:31:14.353330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.233 [2024-11-26 17:31:14.463802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.233 [2024-11-26 17:31:14.671203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:37.233 [2024-11-26 17:31:14.671245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:37.800 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.800 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:34:37.800 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:37.800 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:37.800 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.800 17:31:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.800 BaseBdev1_malloc 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.800 [2024-11-26 17:31:15.040946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:37.800 [2024-11-26 17:31:15.041008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:37.800 [2024-11-26 17:31:15.041033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:37.800 [2024-11-26 17:31:15.041059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:37.800 [2024-11-26 17:31:15.043439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:37.800 [2024-11-26 17:31:15.043484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:37.800 BaseBdev1 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.800 BaseBdev2_malloc 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.800 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.800 [2024-11-26 17:31:15.086759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:37.800 [2024-11-26 17:31:15.086822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:37.801 [2024-11-26 17:31:15.086845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:37.801 [2024-11-26 17:31:15.086859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:37.801 [2024-11-26 17:31:15.089198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:37.801 [2024-11-26 17:31:15.089250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:37.801 BaseBdev2 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.801 spare_malloc 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.801 spare_delay 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.801 [2024-11-26 17:31:15.160266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:37.801 [2024-11-26 17:31:15.160329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:37.801 [2024-11-26 17:31:15.160352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:37.801 [2024-11-26 17:31:15.160366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:37.801 [2024-11-26 17:31:15.162874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:37.801 [2024-11-26 17:31:15.162918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:37.801 spare 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.801 [2024-11-26 17:31:15.172332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:34:37.801 [2024-11-26 17:31:15.174375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:37.801 [2024-11-26 17:31:15.174541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:37.801 [2024-11-26 17:31:15.174558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:37.801 [2024-11-26 17:31:15.174812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:37.801 [2024-11-26 17:31:15.174979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:37.801 [2024-11-26 17:31:15.174993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:37.801 [2024-11-26 17:31:15.175192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:37.801 "name": "raid_bdev1", 00:34:37.801 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:37.801 "strip_size_kb": 0, 00:34:37.801 "state": "online", 00:34:37.801 "raid_level": "raid1", 00:34:37.801 "superblock": true, 00:34:37.801 "num_base_bdevs": 2, 00:34:37.801 "num_base_bdevs_discovered": 2, 00:34:37.801 "num_base_bdevs_operational": 2, 00:34:37.801 "base_bdevs_list": [ 00:34:37.801 { 00:34:37.801 "name": "BaseBdev1", 00:34:37.801 "uuid": "485f9e7a-f93a-54f1-b5d8-65741eb8a492", 00:34:37.801 "is_configured": true, 00:34:37.801 "data_offset": 2048, 00:34:37.801 "data_size": 63488 00:34:37.801 }, 00:34:37.801 { 00:34:37.801 "name": "BaseBdev2", 00:34:37.801 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:37.801 "is_configured": true, 00:34:37.801 "data_offset": 2048, 00:34:37.801 "data_size": 63488 00:34:37.801 } 00:34:37.801 ] 00:34:37.801 }' 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:37.801 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r 
'.[].num_blocks' 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:38.368 [2024-11-26 17:31:15.632679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:38.368 [2024-11-26 
17:31:15.720420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:34:38.368 "name": "raid_bdev1", 00:34:38.368 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:38.368 "strip_size_kb": 0, 00:34:38.368 "state": "online", 00:34:38.368 "raid_level": "raid1", 00:34:38.368 "superblock": true, 00:34:38.368 "num_base_bdevs": 2, 00:34:38.368 "num_base_bdevs_discovered": 1, 00:34:38.368 "num_base_bdevs_operational": 1, 00:34:38.368 "base_bdevs_list": [ 00:34:38.368 { 00:34:38.368 "name": null, 00:34:38.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.368 "is_configured": false, 00:34:38.368 "data_offset": 0, 00:34:38.368 "data_size": 63488 00:34:38.368 }, 00:34:38.368 { 00:34:38.368 "name": "BaseBdev2", 00:34:38.368 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:38.368 "is_configured": true, 00:34:38.368 "data_offset": 2048, 00:34:38.368 "data_size": 63488 00:34:38.368 } 00:34:38.368 ] 00:34:38.368 }' 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:38.368 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:38.627 [2024-11-26 17:31:15.847917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:38.627 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:38.627 Zero copy mechanism will not be used. 00:34:38.627 Running I/O for 60 seconds... 
00:34:38.887 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:38.887 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.887 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:38.887 [2024-11-26 17:31:16.201496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:38.887 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.887 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:38.887 [2024-11-26 17:31:16.277932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:34:38.887 [2024-11-26 17:31:16.280106] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:39.145 [2024-11-26 17:31:16.403328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:39.145 [2024-11-26 17:31:16.530664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:39.145 [2024-11-26 17:31:16.530945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:39.712 154.00 IOPS, 462.00 MiB/s [2024-11-26T17:31:17.159Z] [2024-11-26 17:31:16.988989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:39.712 [2024-11-26 17:31:16.989323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:39.971 "name": "raid_bdev1", 00:34:39.971 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:39.971 "strip_size_kb": 0, 00:34:39.971 "state": "online", 00:34:39.971 "raid_level": "raid1", 00:34:39.971 "superblock": true, 00:34:39.971 "num_base_bdevs": 2, 00:34:39.971 "num_base_bdevs_discovered": 2, 00:34:39.971 "num_base_bdevs_operational": 2, 00:34:39.971 "process": { 00:34:39.971 "type": "rebuild", 00:34:39.971 "target": "spare", 00:34:39.971 "progress": { 00:34:39.971 "blocks": 12288, 00:34:39.971 "percent": 19 00:34:39.971 } 00:34:39.971 }, 00:34:39.971 "base_bdevs_list": [ 00:34:39.971 { 00:34:39.971 "name": "spare", 00:34:39.971 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:39.971 "is_configured": true, 00:34:39.971 "data_offset": 2048, 00:34:39.971 "data_size": 63488 00:34:39.971 }, 00:34:39.971 { 00:34:39.971 "name": "BaseBdev2", 00:34:39.971 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:39.971 
"is_configured": true, 00:34:39.971 "data_offset": 2048, 00:34:39.971 "data_size": 63488 00:34:39.971 } 00:34:39.971 ] 00:34:39.971 }' 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:39.971 [2024-11-26 17:31:17.343199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:39.971 [2024-11-26 17:31:17.343805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.971 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:39.971 [2024-11-26 17:31:17.384853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:40.229 [2024-11-26 17:31:17.470716] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:40.229 [2024-11-26 17:31:17.485357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:40.229 [2024-11-26 17:31:17.485434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:40.229 [2024-11-26 17:31:17.485450] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:40.229 [2024-11-26 17:31:17.531073] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: 
slot: 0 raid_ch: 0x60d000006080 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:40.229 "name": "raid_bdev1", 00:34:40.229 "uuid": 
"8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:40.229 "strip_size_kb": 0, 00:34:40.229 "state": "online", 00:34:40.229 "raid_level": "raid1", 00:34:40.229 "superblock": true, 00:34:40.229 "num_base_bdevs": 2, 00:34:40.229 "num_base_bdevs_discovered": 1, 00:34:40.229 "num_base_bdevs_operational": 1, 00:34:40.229 "base_bdevs_list": [ 00:34:40.229 { 00:34:40.229 "name": null, 00:34:40.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.229 "is_configured": false, 00:34:40.229 "data_offset": 0, 00:34:40.229 "data_size": 63488 00:34:40.229 }, 00:34:40.229 { 00:34:40.229 "name": "BaseBdev2", 00:34:40.229 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:40.229 "is_configured": true, 00:34:40.229 "data_offset": 2048, 00:34:40.229 "data_size": 63488 00:34:40.229 } 00:34:40.229 ] 00:34:40.229 }' 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:40.229 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:40.745 152.00 IOPS, 456.00 MiB/s [2024-11-26T17:31:18.192Z] 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.745 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:40.745 "name": "raid_bdev1", 00:34:40.745 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:40.745 "strip_size_kb": 0, 00:34:40.745 "state": "online", 00:34:40.745 "raid_level": "raid1", 00:34:40.745 "superblock": true, 00:34:40.745 "num_base_bdevs": 2, 00:34:40.745 "num_base_bdevs_discovered": 1, 00:34:40.745 "num_base_bdevs_operational": 1, 00:34:40.745 "base_bdevs_list": [ 00:34:40.745 { 00:34:40.745 "name": null, 00:34:40.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.745 "is_configured": false, 00:34:40.745 "data_offset": 0, 00:34:40.745 "data_size": 63488 00:34:40.745 }, 00:34:40.745 { 00:34:40.746 "name": "BaseBdev2", 00:34:40.746 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:40.746 "is_configured": true, 00:34:40.746 "data_offset": 2048, 00:34:40.746 "data_size": 63488 00:34:40.746 } 00:34:40.746 ] 00:34:40.746 }' 00:34:40.746 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:40.746 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:40.746 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:40.746 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:40.746 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:40.746 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.746 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:40.746 
[2024-11-26 17:31:18.150578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:41.005 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.005 17:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:41.005 [2024-11-26 17:31:18.229955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:41.005 [2024-11-26 17:31:18.232239] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:41.005 [2024-11-26 17:31:18.346415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:41.005 [2024-11-26 17:31:18.347000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:41.264 [2024-11-26 17:31:18.562016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:41.264 [2024-11-26 17:31:18.562390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:41.783 142.67 IOPS, 428.00 MiB/s [2024-11-26T17:31:19.230Z] [2024-11-26 17:31:19.014549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:41.783 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.041 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:42.041 "name": "raid_bdev1", 00:34:42.041 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:42.041 "strip_size_kb": 0, 00:34:42.041 "state": "online", 00:34:42.041 "raid_level": "raid1", 00:34:42.041 "superblock": true, 00:34:42.041 "num_base_bdevs": 2, 00:34:42.041 "num_base_bdevs_discovered": 2, 00:34:42.041 "num_base_bdevs_operational": 2, 00:34:42.041 "process": { 00:34:42.041 "type": "rebuild", 00:34:42.041 "target": "spare", 00:34:42.041 "progress": { 00:34:42.041 "blocks": 10240, 00:34:42.041 "percent": 16 00:34:42.041 } 00:34:42.041 }, 00:34:42.041 "base_bdevs_list": [ 00:34:42.041 { 00:34:42.041 "name": "spare", 00:34:42.041 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:42.041 "is_configured": true, 00:34:42.041 "data_offset": 2048, 00:34:42.041 "data_size": 63488 00:34:42.041 }, 00:34:42.041 { 00:34:42.041 "name": "BaseBdev2", 00:34:42.041 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:42.041 "is_configured": true, 00:34:42.041 "data_offset": 2048, 00:34:42.041 "data_size": 63488 00:34:42.041 } 00:34:42.041 ] 00:34:42.041 }' 00:34:42.041 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:42.041 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:34:42.041 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:42.041 [2024-11-26 17:31:19.339677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:42.041 [2024-11-26 17:31:19.340240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:34:42.042 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=433 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:42.042 "name": "raid_bdev1", 00:34:42.042 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:42.042 "strip_size_kb": 0, 00:34:42.042 "state": "online", 00:34:42.042 "raid_level": "raid1", 00:34:42.042 "superblock": true, 00:34:42.042 "num_base_bdevs": 2, 00:34:42.042 "num_base_bdevs_discovered": 2, 00:34:42.042 "num_base_bdevs_operational": 2, 00:34:42.042 "process": { 00:34:42.042 "type": "rebuild", 00:34:42.042 "target": "spare", 00:34:42.042 "progress": { 00:34:42.042 "blocks": 14336, 00:34:42.042 "percent": 22 00:34:42.042 } 00:34:42.042 }, 00:34:42.042 "base_bdevs_list": [ 00:34:42.042 { 00:34:42.042 "name": "spare", 00:34:42.042 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:42.042 "is_configured": true, 00:34:42.042 "data_offset": 2048, 00:34:42.042 "data_size": 63488 00:34:42.042 }, 00:34:42.042 { 00:34:42.042 "name": "BaseBdev2", 00:34:42.042 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:42.042 "is_configured": true, 00:34:42.042 "data_offset": 2048, 00:34:42.042 "data_size": 63488 00:34:42.042 } 00:34:42.042 ] 00:34:42.042 }' 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:42.042 
17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:42.042 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:42.300 [2024-11-26 17:31:19.554619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:34:42.300 [2024-11-26 17:31:19.554967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:34:42.564 125.50 IOPS, 376.50 MiB/s [2024-11-26T17:31:20.011Z] [2024-11-26 17:31:19.989791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:34:42.564 [2024-11-26 17:31:19.990094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:43.161 17:31:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:43.161 "name": "raid_bdev1", 00:34:43.161 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:43.161 "strip_size_kb": 0, 00:34:43.161 "state": "online", 00:34:43.161 "raid_level": "raid1", 00:34:43.161 "superblock": true, 00:34:43.161 "num_base_bdevs": 2, 00:34:43.161 "num_base_bdevs_discovered": 2, 00:34:43.161 "num_base_bdevs_operational": 2, 00:34:43.161 "process": { 00:34:43.161 "type": "rebuild", 00:34:43.161 "target": "spare", 00:34:43.161 "progress": { 00:34:43.161 "blocks": 28672, 00:34:43.161 "percent": 45 00:34:43.161 } 00:34:43.161 }, 00:34:43.161 "base_bdevs_list": [ 00:34:43.161 { 00:34:43.161 "name": "spare", 00:34:43.161 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:43.161 "is_configured": true, 00:34:43.161 "data_offset": 2048, 00:34:43.161 "data_size": 63488 00:34:43.161 }, 00:34:43.161 { 00:34:43.161 "name": "BaseBdev2", 00:34:43.161 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:43.161 "is_configured": true, 00:34:43.161 "data_offset": 2048, 00:34:43.161 "data_size": 63488 00:34:43.161 } 00:34:43.161 ] 00:34:43.161 }' 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:43.161 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:43.420 17:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:43.420 17:31:20 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:43.420 [2024-11-26 17:31:20.655915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:34:43.420 [2024-11-26 17:31:20.772032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:34:43.988 111.60 IOPS, 334.80 MiB/s [2024-11-26T17:31:21.435Z] [2024-11-26 17:31:21.371503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:34:44.247 [2024-11-26 17:31:21.586389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:44.247 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:44.507 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:44.507 "name": "raid_bdev1", 00:34:44.507 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:44.507 "strip_size_kb": 0, 00:34:44.507 "state": "online", 00:34:44.507 "raid_level": "raid1", 00:34:44.507 "superblock": true, 00:34:44.507 "num_base_bdevs": 2, 00:34:44.507 "num_base_bdevs_discovered": 2, 00:34:44.507 "num_base_bdevs_operational": 2, 00:34:44.507 "process": { 00:34:44.507 "type": "rebuild", 00:34:44.507 "target": "spare", 00:34:44.507 "progress": { 00:34:44.507 "blocks": 47104, 00:34:44.507 "percent": 74 00:34:44.507 } 00:34:44.507 }, 00:34:44.507 "base_bdevs_list": [ 00:34:44.507 { 00:34:44.507 "name": "spare", 00:34:44.507 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:44.507 "is_configured": true, 00:34:44.507 "data_offset": 2048, 00:34:44.507 "data_size": 63488 00:34:44.507 }, 00:34:44.507 { 00:34:44.507 "name": "BaseBdev2", 00:34:44.507 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:44.507 "is_configured": true, 00:34:44.507 "data_offset": 2048, 00:34:44.507 "data_size": 63488 00:34:44.507 } 00:34:44.507 ] 00:34:44.507 }' 00:34:44.507 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:44.507 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:44.507 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:44.507 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:44.507 17:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:44.507 [2024-11-26 17:31:21.818279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:34:45.442 101.17 IOPS, 303.50 MiB/s [2024-11-26T17:31:22.889Z] [2024-11-26 
17:31:22.564896] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:45.442 [2024-11-26 17:31:22.654357] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:45.442 [2024-11-26 17:31:22.656566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.442 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:45.442 "name": "raid_bdev1", 00:34:45.442 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:45.442 "strip_size_kb": 0, 00:34:45.442 "state": "online", 00:34:45.443 "raid_level": "raid1", 00:34:45.443 "superblock": true, 00:34:45.443 "num_base_bdevs": 2, 00:34:45.443 
"num_base_bdevs_discovered": 2, 00:34:45.443 "num_base_bdevs_operational": 2, 00:34:45.443 "base_bdevs_list": [ 00:34:45.443 { 00:34:45.443 "name": "spare", 00:34:45.443 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:45.443 "is_configured": true, 00:34:45.443 "data_offset": 2048, 00:34:45.443 "data_size": 63488 00:34:45.443 }, 00:34:45.443 { 00:34:45.443 "name": "BaseBdev2", 00:34:45.443 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:45.443 "is_configured": true, 00:34:45.443 "data_offset": 2048, 00:34:45.443 "data_size": 63488 00:34:45.443 } 00:34:45.443 ] 00:34:45.443 }' 00:34:45.443 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:45.702 91.71 IOPS, 275.14 MiB/s [2024-11-26T17:31:23.149Z] 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:45.702 "name": "raid_bdev1", 00:34:45.702 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:45.702 "strip_size_kb": 0, 00:34:45.702 "state": "online", 00:34:45.702 "raid_level": "raid1", 00:34:45.702 "superblock": true, 00:34:45.702 "num_base_bdevs": 2, 00:34:45.702 "num_base_bdevs_discovered": 2, 00:34:45.702 "num_base_bdevs_operational": 2, 00:34:45.702 "base_bdevs_list": [ 00:34:45.702 { 00:34:45.702 "name": "spare", 00:34:45.702 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:45.702 "is_configured": true, 00:34:45.702 "data_offset": 2048, 00:34:45.702 "data_size": 63488 00:34:45.702 }, 00:34:45.702 { 00:34:45.702 "name": "BaseBdev2", 00:34:45.702 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:45.702 "is_configured": true, 00:34:45.702 "data_offset": 2048, 00:34:45.702 "data_size": 63488 00:34:45.702 } 00:34:45.702 ] 00:34:45.702 }' 00:34:45.702 17:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:45.702 "name": "raid_bdev1", 00:34:45.702 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:45.702 "strip_size_kb": 0, 00:34:45.702 "state": "online", 00:34:45.702 "raid_level": "raid1", 00:34:45.702 "superblock": true, 00:34:45.702 "num_base_bdevs": 2, 00:34:45.702 "num_base_bdevs_discovered": 2, 00:34:45.702 "num_base_bdevs_operational": 2, 00:34:45.702 "base_bdevs_list": [ 00:34:45.702 { 00:34:45.702 "name": 
"spare", 00:34:45.702 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:45.702 "is_configured": true, 00:34:45.702 "data_offset": 2048, 00:34:45.702 "data_size": 63488 00:34:45.702 }, 00:34:45.702 { 00:34:45.702 "name": "BaseBdev2", 00:34:45.702 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:45.702 "is_configured": true, 00:34:45.702 "data_offset": 2048, 00:34:45.702 "data_size": 63488 00:34:45.702 } 00:34:45.702 ] 00:34:45.702 }' 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:45.702 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:46.270 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:46.270 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.270 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:46.270 [2024-11-26 17:31:23.499492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:46.270 [2024-11-26 17:31:23.499682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:46.270 00:34:46.270 Latency(us) 00:34:46.270 [2024-11-26T17:31:23.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.270 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:34:46.270 raid_bdev1 : 7.71 86.13 258.40 0.00 0.00 15850.21 310.13 133818.27 00:34:46.270 [2024-11-26T17:31:23.717Z] =================================================================================================================== 00:34:46.270 [2024-11-26T17:31:23.717Z] Total : 86.13 258.40 0.00 0.00 15850.21 310.13 133818.27 00:34:46.270 [2024-11-26 17:31:23.581734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:46.270 [2024-11-26 17:31:23.581804] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:46.270 [2024-11-26 17:31:23.581878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:46.270 [2024-11-26 17:31:23.581893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:46.270 { 00:34:46.270 "results": [ 00:34:46.271 { 00:34:46.271 "job": "raid_bdev1", 00:34:46.271 "core_mask": "0x1", 00:34:46.271 "workload": "randrw", 00:34:46.271 "percentage": 50, 00:34:46.271 "status": "finished", 00:34:46.271 "queue_depth": 2, 00:34:46.271 "io_size": 3145728, 00:34:46.271 "runtime": 7.709096, 00:34:46.271 "iops": 86.13201859206319, 00:34:46.271 "mibps": 258.39605577618954, 00:34:46.271 "io_failed": 0, 00:34:46.271 "io_timeout": 0, 00:34:46.271 "avg_latency_us": 15850.214297188753, 00:34:46.271 "min_latency_us": 310.1257142857143, 00:34:46.271 "max_latency_us": 133818.2704761905 00:34:46.271 } 00:34:46.271 ], 00:34:46.271 "core_count": 1 00:34:46.271 } 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true 
']' 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:46.271 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:34:46.530 /dev/nbd0 00:34:46.530 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:46.530 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:46.530 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:46.530 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:34:46.530 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@877 -- # break 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:46.789 1+0 records in 00:34:46.789 1+0 records out 00:34:46.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625528 s, 6.5 MB/s 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:34:46.789 
17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:46.789 17:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:46.789 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:34:47.048 /dev/nbd1 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:47.048 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:47.048 1+0 records in 00:34:47.048 1+0 records out 00:34:47.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435967 s, 9.4 MB/s 00:34:47.049 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:47.049 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:34:47.049 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:47.049 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:47.049 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:34:47.049 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:47.049 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:47.049 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:47.308 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:34:47.308 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:47.308 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:34:47.308 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:47.308 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:34:47.308 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:47.308 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:47.567 17:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:47.827 
17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:47.827 [2024-11-26 17:31:25.102019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:47.827 [2024-11-26 17:31:25.102224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:47.827 [2024-11-26 17:31:25.102307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:47.827 [2024-11-26 17:31:25.102398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:47.827 [2024-11-26 17:31:25.105045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:47.827 
[2024-11-26 17:31:25.105215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:47.827 [2024-11-26 17:31:25.105403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:47.827 [2024-11-26 17:31:25.105561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:47.827 [2024-11-26 17:31:25.105729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:47.827 spare 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:47.827 [2024-11-26 17:31:25.205837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:47.827 [2024-11-26 17:31:25.205875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:47.827 [2024-11-26 17:31:25.206257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:34:47.827 [2024-11-26 17:31:25.206455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:47.827 [2024-11-26 17:31:25.206475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:34:47.827 [2024-11-26 17:31:25.206678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:47.827 17:31:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:47.827 "name": "raid_bdev1", 00:34:47.827 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:47.827 "strip_size_kb": 0, 00:34:47.827 "state": "online", 00:34:47.827 "raid_level": "raid1", 00:34:47.827 "superblock": true, 00:34:47.827 "num_base_bdevs": 2, 00:34:47.827 "num_base_bdevs_discovered": 2, 00:34:47.827 "num_base_bdevs_operational": 2, 
00:34:47.827 "base_bdevs_list": [ 00:34:47.827 { 00:34:47.827 "name": "spare", 00:34:47.827 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:47.827 "is_configured": true, 00:34:47.827 "data_offset": 2048, 00:34:47.827 "data_size": 63488 00:34:47.827 }, 00:34:47.827 { 00:34:47.827 "name": "BaseBdev2", 00:34:47.827 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:47.827 "is_configured": true, 00:34:47.827 "data_offset": 2048, 00:34:47.827 "data_size": 63488 00:34:47.827 } 00:34:47.827 ] 00:34:47.827 }' 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:47.827 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:48.396 "name": "raid_bdev1", 
00:34:48.396 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:48.396 "strip_size_kb": 0, 00:34:48.396 "state": "online", 00:34:48.396 "raid_level": "raid1", 00:34:48.396 "superblock": true, 00:34:48.396 "num_base_bdevs": 2, 00:34:48.396 "num_base_bdevs_discovered": 2, 00:34:48.396 "num_base_bdevs_operational": 2, 00:34:48.396 "base_bdevs_list": [ 00:34:48.396 { 00:34:48.396 "name": "spare", 00:34:48.396 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:48.396 "is_configured": true, 00:34:48.396 "data_offset": 2048, 00:34:48.396 "data_size": 63488 00:34:48.396 }, 00:34:48.396 { 00:34:48.396 "name": "BaseBdev2", 00:34:48.396 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:48.396 "is_configured": true, 00:34:48.396 "data_offset": 2048, 00:34:48.396 "data_size": 63488 00:34:48.396 } 00:34:48.396 ] 00:34:48.396 }' 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:34:48.396 17:31:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:48.396 [2024-11-26 17:31:25.790820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.396 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:48.396 "name": "raid_bdev1", 00:34:48.396 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:48.396 "strip_size_kb": 0, 00:34:48.396 "state": "online", 00:34:48.396 "raid_level": "raid1", 00:34:48.396 "superblock": true, 00:34:48.396 "num_base_bdevs": 2, 00:34:48.396 "num_base_bdevs_discovered": 1, 00:34:48.396 "num_base_bdevs_operational": 1, 00:34:48.396 "base_bdevs_list": [ 00:34:48.396 { 00:34:48.396 "name": null, 00:34:48.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:48.396 "is_configured": false, 00:34:48.396 "data_offset": 0, 00:34:48.396 "data_size": 63488 00:34:48.396 }, 00:34:48.396 { 00:34:48.396 "name": "BaseBdev2", 00:34:48.396 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:48.396 "is_configured": true, 00:34:48.396 "data_offset": 2048, 00:34:48.396 "data_size": 63488 00:34:48.396 } 00:34:48.396 ] 00:34:48.396 }' 00:34:48.397 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:48.397 17:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:48.965 17:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:48.965 17:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.965 17:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:48.965 [2024-11-26 17:31:26.194970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:48.965 [2024-11-26 17:31:26.195186] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) 
smaller than existing raid bdev raid_bdev1 (5) 00:34:48.965 [2024-11-26 17:31:26.195203] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:48.965 [2024-11-26 17:31:26.195250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:48.965 [2024-11-26 17:31:26.212554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:34:48.965 17:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.965 17:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:34:48.965 [2024-11-26 17:31:26.214838] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:34:49.903 "name": "raid_bdev1", 00:34:49.903 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:49.903 "strip_size_kb": 0, 00:34:49.903 "state": "online", 00:34:49.903 "raid_level": "raid1", 00:34:49.903 "superblock": true, 00:34:49.903 "num_base_bdevs": 2, 00:34:49.903 "num_base_bdevs_discovered": 2, 00:34:49.903 "num_base_bdevs_operational": 2, 00:34:49.903 "process": { 00:34:49.903 "type": "rebuild", 00:34:49.903 "target": "spare", 00:34:49.903 "progress": { 00:34:49.903 "blocks": 20480, 00:34:49.903 "percent": 32 00:34:49.903 } 00:34:49.903 }, 00:34:49.903 "base_bdevs_list": [ 00:34:49.903 { 00:34:49.903 "name": "spare", 00:34:49.903 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:49.903 "is_configured": true, 00:34:49.903 "data_offset": 2048, 00:34:49.903 "data_size": 63488 00:34:49.903 }, 00:34:49.903 { 00:34:49.903 "name": "BaseBdev2", 00:34:49.903 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:49.903 "is_configured": true, 00:34:49.903 "data_offset": 2048, 00:34:49.903 "data_size": 63488 00:34:49.903 } 00:34:49.903 ] 00:34:49.903 }' 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:49.903 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:50.162 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:50.162 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:34:50.162 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.162 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:50.162 [2024-11-26 17:31:27.356767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:34:50.162 [2024-11-26 17:31:27.422816] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:50.162 [2024-11-26 17:31:27.422905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:50.162 [2024-11-26 17:31:27.422925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:50.162 [2024-11-26 17:31:27.422935] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:50.162 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:50.163 "name": "raid_bdev1", 00:34:50.163 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:50.163 "strip_size_kb": 0, 00:34:50.163 "state": "online", 00:34:50.163 "raid_level": "raid1", 00:34:50.163 "superblock": true, 00:34:50.163 "num_base_bdevs": 2, 00:34:50.163 "num_base_bdevs_discovered": 1, 00:34:50.163 "num_base_bdevs_operational": 1, 00:34:50.163 "base_bdevs_list": [ 00:34:50.163 { 00:34:50.163 "name": null, 00:34:50.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:50.163 "is_configured": false, 00:34:50.163 "data_offset": 0, 00:34:50.163 "data_size": 63488 00:34:50.163 }, 00:34:50.163 { 00:34:50.163 "name": "BaseBdev2", 00:34:50.163 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:50.163 "is_configured": true, 00:34:50.163 "data_offset": 2048, 00:34:50.163 "data_size": 63488 00:34:50.163 } 00:34:50.163 ] 00:34:50.163 }' 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:50.163 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:50.729 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:50.729 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.729 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:50.729 [2024-11-26 17:31:27.924455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:50.729 [2024-11-26 
17:31:27.924528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:50.729 [2024-11-26 17:31:27.924558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:34:50.729 [2024-11-26 17:31:27.924569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:50.729 [2024-11-26 17:31:27.925106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:50.729 [2024-11-26 17:31:27.925129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:50.729 [2024-11-26 17:31:27.925234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:50.729 [2024-11-26 17:31:27.925249] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:50.729 [2024-11-26 17:31:27.925264] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:50.729 [2024-11-26 17:31:27.925287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:50.729 [2024-11-26 17:31:27.942749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:34:50.729 spare 00:34:50.729 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.729 17:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:34:50.729 [2024-11-26 17:31:27.945122] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.666 17:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:51.666 "name": "raid_bdev1", 00:34:51.666 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:51.666 "strip_size_kb": 0, 00:34:51.666 
"state": "online", 00:34:51.666 "raid_level": "raid1", 00:34:51.666 "superblock": true, 00:34:51.666 "num_base_bdevs": 2, 00:34:51.666 "num_base_bdevs_discovered": 2, 00:34:51.666 "num_base_bdevs_operational": 2, 00:34:51.666 "process": { 00:34:51.666 "type": "rebuild", 00:34:51.666 "target": "spare", 00:34:51.666 "progress": { 00:34:51.666 "blocks": 20480, 00:34:51.666 "percent": 32 00:34:51.666 } 00:34:51.666 }, 00:34:51.666 "base_bdevs_list": [ 00:34:51.666 { 00:34:51.666 "name": "spare", 00:34:51.666 "uuid": "3168293e-fe48-5943-b0af-ef52f25dd98e", 00:34:51.666 "is_configured": true, 00:34:51.666 "data_offset": 2048, 00:34:51.666 "data_size": 63488 00:34:51.666 }, 00:34:51.666 { 00:34:51.666 "name": "BaseBdev2", 00:34:51.666 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:51.666 "is_configured": true, 00:34:51.666 "data_offset": 2048, 00:34:51.666 "data_size": 63488 00:34:51.666 } 00:34:51.666 ] 00:34:51.666 }' 00:34:51.666 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:51.666 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:51.666 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:51.666 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:51.666 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:34:51.666 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.666 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:51.666 [2024-11-26 17:31:29.090026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:51.925 [2024-11-26 17:31:29.152852] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:34:51.925 [2024-11-26 17:31:29.152927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:51.925 [2024-11-26 17:31:29.152943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:51.925 [2024-11-26 17:31:29.152954] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.925 17:31:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:51.925 "name": "raid_bdev1", 00:34:51.925 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:51.925 "strip_size_kb": 0, 00:34:51.925 "state": "online", 00:34:51.925 "raid_level": "raid1", 00:34:51.925 "superblock": true, 00:34:51.925 "num_base_bdevs": 2, 00:34:51.925 "num_base_bdevs_discovered": 1, 00:34:51.925 "num_base_bdevs_operational": 1, 00:34:51.925 "base_bdevs_list": [ 00:34:51.925 { 00:34:51.925 "name": null, 00:34:51.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:51.925 "is_configured": false, 00:34:51.925 "data_offset": 0, 00:34:51.925 "data_size": 63488 00:34:51.925 }, 00:34:51.925 { 00:34:51.925 "name": "BaseBdev2", 00:34:51.925 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:51.925 "is_configured": true, 00:34:51.925 "data_offset": 2048, 00:34:51.925 "data_size": 63488 00:34:51.925 } 00:34:51.925 ] 00:34:51.925 }' 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:51.925 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:52.494 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:52.494 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:52.494 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:52.494 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:52.494 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:52.495 "name": "raid_bdev1", 00:34:52.495 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:52.495 "strip_size_kb": 0, 00:34:52.495 "state": "online", 00:34:52.495 "raid_level": "raid1", 00:34:52.495 "superblock": true, 00:34:52.495 "num_base_bdevs": 2, 00:34:52.495 "num_base_bdevs_discovered": 1, 00:34:52.495 "num_base_bdevs_operational": 1, 00:34:52.495 "base_bdevs_list": [ 00:34:52.495 { 00:34:52.495 "name": null, 00:34:52.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.495 "is_configured": false, 00:34:52.495 "data_offset": 0, 00:34:52.495 "data_size": 63488 00:34:52.495 }, 00:34:52.495 { 00:34:52.495 "name": "BaseBdev2", 00:34:52.495 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:52.495 "is_configured": true, 00:34:52.495 "data_offset": 2048, 00:34:52.495 "data_size": 63488 00:34:52.495 } 00:34:52.495 ] 00:34:52.495 }' 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:52.495 [2024-11-26 17:31:29.781477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:52.495 [2024-11-26 17:31:29.781544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:52.495 [2024-11-26 17:31:29.781573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:34:52.495 [2024-11-26 17:31:29.781592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:52.495 [2024-11-26 17:31:29.782041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:52.495 [2024-11-26 17:31:29.782084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:52.495 [2024-11-26 17:31:29.782171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:52.495 [2024-11-26 17:31:29.782193] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:52.495 [2024-11-26 17:31:29.782203] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:52.495 [2024-11-26 17:31:29.782217] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:34:52.495 BaseBdev1 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.495 17:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:53.432 "name": "raid_bdev1", 00:34:53.432 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:53.432 "strip_size_kb": 0, 00:34:53.432 "state": "online", 00:34:53.432 "raid_level": "raid1", 00:34:53.432 "superblock": true, 00:34:53.432 "num_base_bdevs": 2, 00:34:53.432 "num_base_bdevs_discovered": 1, 00:34:53.432 "num_base_bdevs_operational": 1, 00:34:53.432 "base_bdevs_list": [ 00:34:53.432 { 00:34:53.432 "name": null, 00:34:53.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.432 "is_configured": false, 00:34:53.432 "data_offset": 0, 00:34:53.432 "data_size": 63488 00:34:53.432 }, 00:34:53.432 { 00:34:53.432 "name": "BaseBdev2", 00:34:53.432 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:53.432 "is_configured": true, 00:34:53.432 "data_offset": 2048, 00:34:53.432 "data_size": 63488 00:34:53.432 } 00:34:53.432 ] 00:34:53.432 }' 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:53.432 17:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:54.001 "name": "raid_bdev1", 00:34:54.001 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:54.001 "strip_size_kb": 0, 00:34:54.001 "state": "online", 00:34:54.001 "raid_level": "raid1", 00:34:54.001 "superblock": true, 00:34:54.001 "num_base_bdevs": 2, 00:34:54.001 "num_base_bdevs_discovered": 1, 00:34:54.001 "num_base_bdevs_operational": 1, 00:34:54.001 "base_bdevs_list": [ 00:34:54.001 { 00:34:54.001 "name": null, 00:34:54.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:54.001 "is_configured": false, 00:34:54.001 "data_offset": 0, 00:34:54.001 "data_size": 63488 00:34:54.001 }, 00:34:54.001 { 00:34:54.001 "name": "BaseBdev2", 00:34:54.001 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:54.001 "is_configured": true, 00:34:54.001 "data_offset": 2048, 00:34:54.001 "data_size": 63488 00:34:54.001 } 00:34:54.001 ] 00:34:54.001 }' 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.001 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:54.001 [2024-11-26 17:31:31.390041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:54.001 [2024-11-26 17:31:31.390224] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:54.001 [2024-11-26 17:31:31.390240] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:54.001 request: 00:34:54.001 { 00:34:54.001 "base_bdev": "BaseBdev1", 00:34:54.001 "raid_bdev": "raid_bdev1", 00:34:54.001 "method": "bdev_raid_add_base_bdev", 00:34:54.001 "req_id": 1 00:34:54.001 } 00:34:54.001 Got JSON-RPC error response 00:34:54.001 response: 00:34:54.001 { 00:34:54.001 "code": -22, 00:34:54.001 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:54.001 } 00:34:54.002 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:34:54.002 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:34:54.002 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:54.002 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:54.002 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:54.002 17:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.401 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:55.401 "name": "raid_bdev1", 00:34:55.401 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:55.402 "strip_size_kb": 0, 00:34:55.402 "state": "online", 00:34:55.402 "raid_level": "raid1", 00:34:55.402 "superblock": true, 00:34:55.402 "num_base_bdevs": 2, 00:34:55.402 "num_base_bdevs_discovered": 1, 00:34:55.402 "num_base_bdevs_operational": 1, 00:34:55.402 "base_bdevs_list": [ 00:34:55.402 { 00:34:55.402 "name": null, 00:34:55.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.402 "is_configured": false, 00:34:55.402 "data_offset": 0, 00:34:55.402 "data_size": 63488 00:34:55.402 }, 00:34:55.402 { 00:34:55.402 "name": "BaseBdev2", 00:34:55.402 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:55.402 "is_configured": true, 00:34:55.402 "data_offset": 2048, 00:34:55.402 "data_size": 63488 00:34:55.402 } 00:34:55.402 ] 00:34:55.402 }' 00:34:55.402 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:55.402 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:55.661 17:31:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:55.661 "name": "raid_bdev1", 00:34:55.661 "uuid": "8f4d75dc-85cf-412c-ab70-ee3dda132df2", 00:34:55.661 "strip_size_kb": 0, 00:34:55.661 "state": "online", 00:34:55.661 "raid_level": "raid1", 00:34:55.661 "superblock": true, 00:34:55.661 "num_base_bdevs": 2, 00:34:55.661 "num_base_bdevs_discovered": 1, 00:34:55.661 "num_base_bdevs_operational": 1, 00:34:55.661 "base_bdevs_list": [ 00:34:55.661 { 00:34:55.661 "name": null, 00:34:55.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.661 "is_configured": false, 00:34:55.661 "data_offset": 0, 00:34:55.661 "data_size": 63488 00:34:55.661 }, 00:34:55.661 { 00:34:55.661 "name": "BaseBdev2", 00:34:55.661 "uuid": "ac5cfe90-519a-5216-99d4-13731cd962ce", 00:34:55.661 "is_configured": true, 00:34:55.661 "data_offset": 2048, 00:34:55.661 "data_size": 63488 00:34:55.661 } 00:34:55.661 ] 00:34:55.661 }' 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:55.661 17:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:55.661 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:55.661 17:31:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77315 00:34:55.661 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77315 ']' 00:34:55.661 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77315 00:34:55.661 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:34:55.661 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.661 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77315 00:34:55.661 killing process with pid 77315 00:34:55.661 Received shutdown signal, test time was about 17.215389 seconds 00:34:55.661 00:34:55.661 Latency(us) 00:34:55.661 [2024-11-26T17:31:33.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.661 [2024-11-26T17:31:33.108Z] =================================================================================================================== 00:34:55.661 [2024-11-26T17:31:33.108Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:55.661 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:55.662 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:55.662 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77315' 00:34:55.662 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77315 00:34:55.662 [2024-11-26 17:31:33.065762] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:55.662 17:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77315 00:34:55.662 [2024-11-26 17:31:33.065892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:55.662 [2024-11-26 17:31:33.065948] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:55.662 [2024-11-26 17:31:33.065960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:34:55.920 [2024-11-26 17:31:33.303738] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:57.299 ************************************ 00:34:57.299 END TEST raid_rebuild_test_sb_io 00:34:57.299 ************************************ 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:34:57.299 00:34:57.299 real 0m20.423s 00:34:57.299 user 0m26.814s 00:34:57.299 sys 0m2.316s 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:57.299 17:31:34 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:34:57.299 17:31:34 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:34:57.299 17:31:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:57.299 17:31:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:57.299 17:31:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:57.299 ************************************ 00:34:57.299 START TEST raid_rebuild_test 00:34:57.299 ************************************ 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:34:57.299 17:31:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:57.299 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77999 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77999 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77999 ']' 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:57.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:57.300 17:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.300 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:57.300 Zero copy mechanism will not be used. 
00:34:57.300 [2024-11-26 17:31:34.720463] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:34:57.300 [2024-11-26 17:31:34.720638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77999 ] 00:34:57.559 [2024-11-26 17:31:34.910059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.819 [2024-11-26 17:31:35.027584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.819 [2024-11-26 17:31:35.243689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:57.819 [2024-11-26 17:31:35.243730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.388 BaseBdev1_malloc 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.388 
[2024-11-26 17:31:35.697734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:58.388 [2024-11-26 17:31:35.697798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:58.388 [2024-11-26 17:31:35.697824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:58.388 [2024-11-26 17:31:35.697838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:58.388 [2024-11-26 17:31:35.700465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:58.388 [2024-11-26 17:31:35.700526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:58.388 BaseBdev1 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.388 BaseBdev2_malloc 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.388 [2024-11-26 17:31:35.755075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:58.388 [2024-11-26 17:31:35.755259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:34:58.388 [2024-11-26 17:31:35.755320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:58.388 [2024-11-26 17:31:35.755407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:58.388 [2024-11-26 17:31:35.757781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:58.388 [2024-11-26 17:31:35.757957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:58.388 BaseBdev2 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.388 BaseBdev3_malloc 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.388 [2024-11-26 17:31:35.822908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:34:58.388 [2024-11-26 17:31:35.822969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:58.388 [2024-11-26 17:31:35.822995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:58.388 [2024-11-26 17:31:35.823010] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:58.388 [2024-11-26 17:31:35.825524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:58.388 [2024-11-26 17:31:35.825566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:58.388 BaseBdev3 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.388 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.648 BaseBdev4_malloc 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.648 [2024-11-26 17:31:35.875529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:34:58.648 [2024-11-26 17:31:35.875702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:58.648 [2024-11-26 17:31:35.875756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:34:58.648 [2024-11-26 17:31:35.875852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:58.648 [2024-11-26 17:31:35.878239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:58.648 [2024-11-26 17:31:35.878376] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:34:58.648 BaseBdev4 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.648 spare_malloc 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.648 spare_delay 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.648 [2024-11-26 17:31:35.943969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:58.648 [2024-11-26 17:31:35.944144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:58.648 [2024-11-26 17:31:35.944199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:34:58.648 [2024-11-26 17:31:35.944306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:58.648 [2024-11-26 
17:31:35.946764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:58.648 [2024-11-26 17:31:35.946912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:58.648 spare 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.648 [2024-11-26 17:31:35.952014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:58.648 [2024-11-26 17:31:35.954246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:58.648 [2024-11-26 17:31:35.954358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:58.648 [2024-11-26 17:31:35.954529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:58.648 [2024-11-26 17:31:35.954664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:58.648 [2024-11-26 17:31:35.954734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:34:58.648 [2024-11-26 17:31:35.955203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:58.648 [2024-11-26 17:31:35.955509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:58.648 [2024-11-26 17:31:35.955617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:58.648 [2024-11-26 17:31:35.955942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.648 17:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.648 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:58.648 "name": "raid_bdev1", 00:34:58.648 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:34:58.648 "strip_size_kb": 0, 00:34:58.648 "state": "online", 00:34:58.648 "raid_level": 
"raid1", 00:34:58.648 "superblock": false, 00:34:58.648 "num_base_bdevs": 4, 00:34:58.648 "num_base_bdevs_discovered": 4, 00:34:58.648 "num_base_bdevs_operational": 4, 00:34:58.648 "base_bdevs_list": [ 00:34:58.648 { 00:34:58.648 "name": "BaseBdev1", 00:34:58.648 "uuid": "77d50810-7e8a-5f0c-960a-b17848f17417", 00:34:58.648 "is_configured": true, 00:34:58.648 "data_offset": 0, 00:34:58.648 "data_size": 65536 00:34:58.648 }, 00:34:58.648 { 00:34:58.648 "name": "BaseBdev2", 00:34:58.648 "uuid": "01fbcc41-d8ee-521a-8fec-ec17b4a37d81", 00:34:58.648 "is_configured": true, 00:34:58.648 "data_offset": 0, 00:34:58.648 "data_size": 65536 00:34:58.648 }, 00:34:58.648 { 00:34:58.648 "name": "BaseBdev3", 00:34:58.648 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:34:58.648 "is_configured": true, 00:34:58.648 "data_offset": 0, 00:34:58.648 "data_size": 65536 00:34:58.648 }, 00:34:58.648 { 00:34:58.648 "name": "BaseBdev4", 00:34:58.648 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:34:58.648 "is_configured": true, 00:34:58.648 "data_offset": 0, 00:34:58.648 "data_size": 65536 00:34:58.648 } 00:34:58.648 ] 00:34:58.648 }' 00:34:58.648 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:58.648 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.216 [2024-11-26 17:31:36.404498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.216 17:31:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:59.216 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:59.217 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:59.217 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:59.217 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:59.217 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:59.217 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:59.217 17:31:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:59.476 [2024-11-26 17:31:36.756246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:34:59.476 /dev/nbd0 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:59.476 1+0 records in 00:34:59.476 1+0 records out 00:34:59.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280391 s, 14.6 MB/s 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:34:59.476 17:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:35:07.594 65536+0 records in 00:35:07.594 65536+0 records out 00:35:07.594 33554432 bytes (34 MB, 32 MiB) copied, 6.83766 s, 4.9 MB/s 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:07.594 [2024-11-26 17:31:43.865908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:07.594 
17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.594 [2024-11-26 17:31:43.882017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:07.594 17:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:07.595 17:31:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:07.595 "name": "raid_bdev1", 00:35:07.595 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:07.595 "strip_size_kb": 0, 00:35:07.595 "state": "online", 00:35:07.595 "raid_level": "raid1", 00:35:07.595 "superblock": false, 00:35:07.595 "num_base_bdevs": 4, 00:35:07.595 "num_base_bdevs_discovered": 3, 00:35:07.595 "num_base_bdevs_operational": 3, 00:35:07.595 "base_bdevs_list": [ 00:35:07.595 { 00:35:07.595 "name": null, 00:35:07.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.595 "is_configured": false, 00:35:07.595 "data_offset": 0, 00:35:07.595 "data_size": 65536 00:35:07.595 }, 00:35:07.595 { 00:35:07.595 "name": "BaseBdev2", 00:35:07.595 "uuid": "01fbcc41-d8ee-521a-8fec-ec17b4a37d81", 00:35:07.595 "is_configured": true, 00:35:07.595 "data_offset": 0, 00:35:07.595 "data_size": 65536 00:35:07.595 }, 00:35:07.595 { 00:35:07.595 "name": "BaseBdev3", 00:35:07.595 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:07.595 "is_configured": true, 00:35:07.595 "data_offset": 0, 00:35:07.595 "data_size": 65536 00:35:07.595 }, 00:35:07.595 { 00:35:07.595 "name": "BaseBdev4", 00:35:07.595 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:07.595 
"is_configured": true, 00:35:07.595 "data_offset": 0, 00:35:07.595 "data_size": 65536 00:35:07.595 } 00:35:07.595 ] 00:35:07.595 }' 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:07.595 17:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.595 17:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:07.595 17:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.595 17:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.595 [2024-11-26 17:31:44.322136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:07.595 [2024-11-26 17:31:44.339270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:35:07.595 17:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.595 17:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:35:07.595 [2024-11-26 17:31:44.341441] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.196 
17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:08.196 "name": "raid_bdev1", 00:35:08.196 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:08.196 "strip_size_kb": 0, 00:35:08.196 "state": "online", 00:35:08.196 "raid_level": "raid1", 00:35:08.196 "superblock": false, 00:35:08.196 "num_base_bdevs": 4, 00:35:08.196 "num_base_bdevs_discovered": 4, 00:35:08.196 "num_base_bdevs_operational": 4, 00:35:08.196 "process": { 00:35:08.196 "type": "rebuild", 00:35:08.196 "target": "spare", 00:35:08.196 "progress": { 00:35:08.196 "blocks": 20480, 00:35:08.196 "percent": 31 00:35:08.196 } 00:35:08.196 }, 00:35:08.196 "base_bdevs_list": [ 00:35:08.196 { 00:35:08.196 "name": "spare", 00:35:08.196 "uuid": "7d51a1bb-3c55-58cd-a340-02f28949d1cf", 00:35:08.196 "is_configured": true, 00:35:08.196 "data_offset": 0, 00:35:08.196 "data_size": 65536 00:35:08.196 }, 00:35:08.196 { 00:35:08.196 "name": "BaseBdev2", 00:35:08.196 "uuid": "01fbcc41-d8ee-521a-8fec-ec17b4a37d81", 00:35:08.196 "is_configured": true, 00:35:08.196 "data_offset": 0, 00:35:08.196 "data_size": 65536 00:35:08.196 }, 00:35:08.196 { 00:35:08.196 "name": "BaseBdev3", 00:35:08.196 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:08.196 "is_configured": true, 00:35:08.196 "data_offset": 0, 00:35:08.196 "data_size": 65536 00:35:08.196 }, 00:35:08.196 { 00:35:08.196 "name": "BaseBdev4", 00:35:08.196 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:08.196 "is_configured": true, 00:35:08.196 "data_offset": 0, 00:35:08.196 "data_size": 65536 00:35:08.196 } 00:35:08.196 ] 00:35:08.196 }' 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:08.196 [2024-11-26 17:31:45.490956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:08.196 [2024-11-26 17:31:45.549330] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:08.196 [2024-11-26 17:31:45.549628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:08.196 [2024-11-26 17:31:45.549741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:08.196 [2024-11-26 17:31:45.549786] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:08.196 17:31:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.196 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:08.196 "name": "raid_bdev1", 00:35:08.196 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:08.196 "strip_size_kb": 0, 00:35:08.196 "state": "online", 00:35:08.196 "raid_level": "raid1", 00:35:08.196 "superblock": false, 00:35:08.196 "num_base_bdevs": 4, 00:35:08.196 "num_base_bdevs_discovered": 3, 00:35:08.196 "num_base_bdevs_operational": 3, 00:35:08.196 "base_bdevs_list": [ 00:35:08.196 { 00:35:08.196 "name": null, 00:35:08.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.196 "is_configured": false, 00:35:08.196 "data_offset": 0, 00:35:08.196 "data_size": 65536 00:35:08.196 }, 00:35:08.196 { 00:35:08.196 "name": "BaseBdev2", 00:35:08.196 "uuid": "01fbcc41-d8ee-521a-8fec-ec17b4a37d81", 00:35:08.196 "is_configured": true, 00:35:08.196 "data_offset": 0, 00:35:08.196 "data_size": 65536 00:35:08.196 }, 00:35:08.196 { 00:35:08.196 "name": 
"BaseBdev3", 00:35:08.196 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:08.196 "is_configured": true, 00:35:08.196 "data_offset": 0, 00:35:08.196 "data_size": 65536 00:35:08.196 }, 00:35:08.196 { 00:35:08.196 "name": "BaseBdev4", 00:35:08.196 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:08.196 "is_configured": true, 00:35:08.196 "data_offset": 0, 00:35:08.196 "data_size": 65536 00:35:08.196 } 00:35:08.196 ] 00:35:08.197 }' 00:35:08.197 17:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:08.197 17:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:08.765 "name": "raid_bdev1", 00:35:08.765 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:08.765 "strip_size_kb": 0, 00:35:08.765 "state": "online", 00:35:08.765 "raid_level": 
"raid1", 00:35:08.765 "superblock": false, 00:35:08.765 "num_base_bdevs": 4, 00:35:08.765 "num_base_bdevs_discovered": 3, 00:35:08.765 "num_base_bdevs_operational": 3, 00:35:08.765 "base_bdevs_list": [ 00:35:08.765 { 00:35:08.765 "name": null, 00:35:08.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.765 "is_configured": false, 00:35:08.765 "data_offset": 0, 00:35:08.765 "data_size": 65536 00:35:08.765 }, 00:35:08.765 { 00:35:08.765 "name": "BaseBdev2", 00:35:08.765 "uuid": "01fbcc41-d8ee-521a-8fec-ec17b4a37d81", 00:35:08.765 "is_configured": true, 00:35:08.765 "data_offset": 0, 00:35:08.765 "data_size": 65536 00:35:08.765 }, 00:35:08.765 { 00:35:08.765 "name": "BaseBdev3", 00:35:08.765 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:08.765 "is_configured": true, 00:35:08.765 "data_offset": 0, 00:35:08.765 "data_size": 65536 00:35:08.765 }, 00:35:08.765 { 00:35:08.765 "name": "BaseBdev4", 00:35:08.765 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:08.765 "is_configured": true, 00:35:08.765 "data_offset": 0, 00:35:08.765 "data_size": 65536 00:35:08.765 } 00:35:08.765 ] 00:35:08.765 }' 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:08.765 [2024-11-26 17:31:46.173213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:35:08.765 [2024-11-26 17:31:46.187806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.765 17:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:35:08.765 [2024-11-26 17:31:46.190000] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:10.143 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:10.143 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:10.143 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:10.143 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:10.143 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:10.143 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.143 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.143 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:10.143 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:10.144 "name": "raid_bdev1", 00:35:10.144 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:10.144 "strip_size_kb": 0, 00:35:10.144 "state": "online", 00:35:10.144 "raid_level": "raid1", 00:35:10.144 "superblock": false, 00:35:10.144 "num_base_bdevs": 4, 00:35:10.144 "num_base_bdevs_discovered": 4, 00:35:10.144 "num_base_bdevs_operational": 4, 
00:35:10.144 "process": { 00:35:10.144 "type": "rebuild", 00:35:10.144 "target": "spare", 00:35:10.144 "progress": { 00:35:10.144 "blocks": 20480, 00:35:10.144 "percent": 31 00:35:10.144 } 00:35:10.144 }, 00:35:10.144 "base_bdevs_list": [ 00:35:10.144 { 00:35:10.144 "name": "spare", 00:35:10.144 "uuid": "7d51a1bb-3c55-58cd-a340-02f28949d1cf", 00:35:10.144 "is_configured": true, 00:35:10.144 "data_offset": 0, 00:35:10.144 "data_size": 65536 00:35:10.144 }, 00:35:10.144 { 00:35:10.144 "name": "BaseBdev2", 00:35:10.144 "uuid": "01fbcc41-d8ee-521a-8fec-ec17b4a37d81", 00:35:10.144 "is_configured": true, 00:35:10.144 "data_offset": 0, 00:35:10.144 "data_size": 65536 00:35:10.144 }, 00:35:10.144 { 00:35:10.144 "name": "BaseBdev3", 00:35:10.144 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:10.144 "is_configured": true, 00:35:10.144 "data_offset": 0, 00:35:10.144 "data_size": 65536 00:35:10.144 }, 00:35:10.144 { 00:35:10.144 "name": "BaseBdev4", 00:35:10.144 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:10.144 "is_configured": true, 00:35:10.144 "data_offset": 0, 00:35:10.144 "data_size": 65536 00:35:10.144 } 00:35:10.144 ] 00:35:10.144 }' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:10.144 [2024-11-26 17:31:47.335568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:10.144 [2024-11-26 17:31:47.397911] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:10.144 "name": "raid_bdev1", 00:35:10.144 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:10.144 "strip_size_kb": 0, 00:35:10.144 "state": "online", 00:35:10.144 "raid_level": "raid1", 00:35:10.144 "superblock": false, 00:35:10.144 "num_base_bdevs": 4, 00:35:10.144 "num_base_bdevs_discovered": 3, 00:35:10.144 "num_base_bdevs_operational": 3, 00:35:10.144 "process": { 00:35:10.144 "type": "rebuild", 00:35:10.144 "target": "spare", 00:35:10.144 "progress": { 00:35:10.144 "blocks": 24576, 00:35:10.144 "percent": 37 00:35:10.144 } 00:35:10.144 }, 00:35:10.144 "base_bdevs_list": [ 00:35:10.144 { 00:35:10.144 "name": "spare", 00:35:10.144 "uuid": "7d51a1bb-3c55-58cd-a340-02f28949d1cf", 00:35:10.144 "is_configured": true, 00:35:10.144 "data_offset": 0, 00:35:10.144 "data_size": 65536 00:35:10.144 }, 00:35:10.144 { 00:35:10.144 "name": null, 00:35:10.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.144 "is_configured": false, 00:35:10.144 "data_offset": 0, 00:35:10.144 "data_size": 65536 00:35:10.144 }, 00:35:10.144 { 00:35:10.144 "name": "BaseBdev3", 00:35:10.144 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:10.144 "is_configured": true, 00:35:10.144 "data_offset": 0, 00:35:10.144 "data_size": 65536 00:35:10.144 }, 00:35:10.144 { 00:35:10.144 "name": "BaseBdev4", 00:35:10.144 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:10.144 "is_configured": true, 00:35:10.144 "data_offset": 0, 00:35:10.144 "data_size": 65536 00:35:10.144 } 00:35:10.144 ] 00:35:10.144 }' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:10.144 17:31:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=461 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.144 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:10.144 "name": "raid_bdev1", 00:35:10.144 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:10.144 "strip_size_kb": 0, 00:35:10.144 "state": "online", 00:35:10.145 "raid_level": "raid1", 00:35:10.145 "superblock": false, 00:35:10.145 "num_base_bdevs": 4, 00:35:10.145 "num_base_bdevs_discovered": 3, 00:35:10.145 "num_base_bdevs_operational": 3, 00:35:10.145 "process": { 00:35:10.145 "type": "rebuild", 00:35:10.145 "target": "spare", 00:35:10.145 "progress": { 00:35:10.145 "blocks": 26624, 00:35:10.145 "percent": 40 
00:35:10.145 } 00:35:10.145 }, 00:35:10.145 "base_bdevs_list": [ 00:35:10.145 { 00:35:10.145 "name": "spare", 00:35:10.145 "uuid": "7d51a1bb-3c55-58cd-a340-02f28949d1cf", 00:35:10.145 "is_configured": true, 00:35:10.145 "data_offset": 0, 00:35:10.145 "data_size": 65536 00:35:10.145 }, 00:35:10.145 { 00:35:10.145 "name": null, 00:35:10.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.145 "is_configured": false, 00:35:10.145 "data_offset": 0, 00:35:10.145 "data_size": 65536 00:35:10.145 }, 00:35:10.145 { 00:35:10.145 "name": "BaseBdev3", 00:35:10.145 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:10.145 "is_configured": true, 00:35:10.145 "data_offset": 0, 00:35:10.145 "data_size": 65536 00:35:10.145 }, 00:35:10.145 { 00:35:10.145 "name": "BaseBdev4", 00:35:10.145 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:10.145 "is_configured": true, 00:35:10.145 "data_offset": 0, 00:35:10.145 "data_size": 65536 00:35:10.145 } 00:35:10.145 ] 00:35:10.145 }' 00:35:10.145 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:10.404 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:10.404 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:10.404 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:10.404 17:31:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:11.341 17:31:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:11.341 "name": "raid_bdev1", 00:35:11.341 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:11.341 "strip_size_kb": 0, 00:35:11.341 "state": "online", 00:35:11.341 "raid_level": "raid1", 00:35:11.341 "superblock": false, 00:35:11.341 "num_base_bdevs": 4, 00:35:11.341 "num_base_bdevs_discovered": 3, 00:35:11.341 "num_base_bdevs_operational": 3, 00:35:11.341 "process": { 00:35:11.341 "type": "rebuild", 00:35:11.341 "target": "spare", 00:35:11.341 "progress": { 00:35:11.341 "blocks": 49152, 00:35:11.341 "percent": 75 00:35:11.341 } 00:35:11.341 }, 00:35:11.341 "base_bdevs_list": [ 00:35:11.341 { 00:35:11.341 "name": "spare", 00:35:11.341 "uuid": "7d51a1bb-3c55-58cd-a340-02f28949d1cf", 00:35:11.341 "is_configured": true, 00:35:11.341 "data_offset": 0, 00:35:11.341 "data_size": 65536 00:35:11.341 }, 00:35:11.341 { 00:35:11.341 "name": null, 00:35:11.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:11.341 "is_configured": false, 00:35:11.341 "data_offset": 0, 00:35:11.341 "data_size": 65536 00:35:11.341 }, 00:35:11.341 { 00:35:11.341 "name": "BaseBdev3", 00:35:11.341 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:11.341 "is_configured": true, 
00:35:11.341 "data_offset": 0, 00:35:11.341 "data_size": 65536 00:35:11.341 }, 00:35:11.341 { 00:35:11.341 "name": "BaseBdev4", 00:35:11.341 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:11.341 "is_configured": true, 00:35:11.341 "data_offset": 0, 00:35:11.341 "data_size": 65536 00:35:11.341 } 00:35:11.341 ] 00:35:11.341 }' 00:35:11.341 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:11.342 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:11.342 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:11.601 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:11.601 17:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:12.169 [2024-11-26 17:31:49.410952] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:12.169 [2024-11-26 17:31:49.411065] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:12.169 [2024-11-26 17:31:49.411117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:12.428 "name": "raid_bdev1", 00:35:12.428 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:12.428 "strip_size_kb": 0, 00:35:12.428 "state": "online", 00:35:12.428 "raid_level": "raid1", 00:35:12.428 "superblock": false, 00:35:12.428 "num_base_bdevs": 4, 00:35:12.428 "num_base_bdevs_discovered": 3, 00:35:12.428 "num_base_bdevs_operational": 3, 00:35:12.428 "base_bdevs_list": [ 00:35:12.428 { 00:35:12.428 "name": "spare", 00:35:12.428 "uuid": "7d51a1bb-3c55-58cd-a340-02f28949d1cf", 00:35:12.428 "is_configured": true, 00:35:12.428 "data_offset": 0, 00:35:12.428 "data_size": 65536 00:35:12.428 }, 00:35:12.428 { 00:35:12.428 "name": null, 00:35:12.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:12.428 "is_configured": false, 00:35:12.428 "data_offset": 0, 00:35:12.428 "data_size": 65536 00:35:12.428 }, 00:35:12.428 { 00:35:12.428 "name": "BaseBdev3", 00:35:12.428 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:12.428 "is_configured": true, 00:35:12.428 "data_offset": 0, 00:35:12.428 "data_size": 65536 00:35:12.428 }, 00:35:12.428 { 00:35:12.428 "name": "BaseBdev4", 00:35:12.428 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:12.428 "is_configured": true, 00:35:12.428 "data_offset": 0, 00:35:12.428 "data_size": 65536 00:35:12.428 } 00:35:12.428 ] 00:35:12.428 }' 00:35:12.428 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:12.688 17:31:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:12.688 "name": "raid_bdev1", 00:35:12.688 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:12.688 "strip_size_kb": 0, 00:35:12.688 "state": "online", 00:35:12.688 "raid_level": "raid1", 00:35:12.688 "superblock": false, 00:35:12.688 "num_base_bdevs": 4, 00:35:12.688 "num_base_bdevs_discovered": 3, 00:35:12.688 "num_base_bdevs_operational": 3, 00:35:12.688 "base_bdevs_list": [ 00:35:12.688 { 00:35:12.688 "name": "spare", 
00:35:12.688 "uuid": "7d51a1bb-3c55-58cd-a340-02f28949d1cf", 00:35:12.688 "is_configured": true, 00:35:12.688 "data_offset": 0, 00:35:12.688 "data_size": 65536 00:35:12.688 }, 00:35:12.688 { 00:35:12.688 "name": null, 00:35:12.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:12.688 "is_configured": false, 00:35:12.688 "data_offset": 0, 00:35:12.688 "data_size": 65536 00:35:12.688 }, 00:35:12.688 { 00:35:12.688 "name": "BaseBdev3", 00:35:12.688 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:12.688 "is_configured": true, 00:35:12.688 "data_offset": 0, 00:35:12.688 "data_size": 65536 00:35:12.688 }, 00:35:12.688 { 00:35:12.688 "name": "BaseBdev4", 00:35:12.688 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:12.688 "is_configured": true, 00:35:12.688 "data_offset": 0, 00:35:12.688 "data_size": 65536 00:35:12.688 } 00:35:12.688 ] 00:35:12.688 }' 00:35:12.688 17:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:12.688 17:31:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:12.688 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.947 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:12.947 "name": "raid_bdev1", 00:35:12.947 "uuid": "389c7e6b-568d-4d93-8099-2e8450273649", 00:35:12.947 "strip_size_kb": 0, 00:35:12.947 "state": "online", 00:35:12.947 "raid_level": "raid1", 00:35:12.947 "superblock": false, 00:35:12.947 "num_base_bdevs": 4, 00:35:12.947 "num_base_bdevs_discovered": 3, 00:35:12.947 "num_base_bdevs_operational": 3, 00:35:12.947 "base_bdevs_list": [ 00:35:12.947 { 00:35:12.947 "name": "spare", 00:35:12.947 "uuid": "7d51a1bb-3c55-58cd-a340-02f28949d1cf", 00:35:12.947 "is_configured": true, 00:35:12.947 "data_offset": 0, 00:35:12.947 "data_size": 65536 00:35:12.947 }, 00:35:12.947 { 00:35:12.947 "name": null, 00:35:12.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:12.947 "is_configured": false, 00:35:12.947 "data_offset": 0, 00:35:12.947 "data_size": 65536 00:35:12.947 }, 00:35:12.947 { 00:35:12.947 "name": "BaseBdev3", 00:35:12.947 "uuid": "4093972b-59e4-502c-a5e0-db551010b880", 00:35:12.947 "is_configured": true, 
00:35:12.947 "data_offset": 0, 00:35:12.947 "data_size": 65536 00:35:12.947 }, 00:35:12.947 { 00:35:12.947 "name": "BaseBdev4", 00:35:12.947 "uuid": "5e4c0fcc-6f88-59d6-a643-dce65ca3b3b5", 00:35:12.947 "is_configured": true, 00:35:12.947 "data_offset": 0, 00:35:12.947 "data_size": 65536 00:35:12.947 } 00:35:12.947 ] 00:35:12.947 }' 00:35:12.947 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:12.947 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:13.206 [2024-11-26 17:31:50.549909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:13.206 [2024-11-26 17:31:50.549951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:13.206 [2024-11-26 17:31:50.550039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:13.206 [2024-11-26 17:31:50.550140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:13.206 [2024-11-26 17:31:50.550153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:13.206 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:13.465 /dev/nbd0 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:35:13.465 17:31:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:13.465 1+0 records in 00:35:13.465 1+0 records out 00:35:13.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027173 s, 15.1 MB/s 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:13.465 17:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:35:13.724 /dev/nbd1 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:13.724 
17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:13.724 1+0 records in 00:35:13.724 1+0 records out 00:35:13.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563277 s, 7.3 MB/s 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:35:13.724 17:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:35:13.983 17:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:35:13.983 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:13.983 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:13.983 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:13.983 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:13.984 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:13.984 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:14.243 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:35:14.502 
17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77999 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77999 ']' 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77999 00:35:14.502 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:35:14.503 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.503 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77999 00:35:14.503 killing process with pid 77999 00:35:14.503 Received shutdown signal, test time was about 60.000000 seconds 00:35:14.503 00:35:14.503 Latency(us) 00:35:14.503 [2024-11-26T17:31:51.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.503 [2024-11-26T17:31:51.950Z] =================================================================================================================== 00:35:14.503 [2024-11-26T17:31:51.950Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:35:14.503 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:14.503 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:14.503 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77999' 00:35:14.503 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77999 00:35:14.503 [2024-11-26 17:31:51.925111] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:14.503 17:31:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77999 00:35:15.070 [2024-11-26 17:31:52.433246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:16.447 ************************************ 00:35:16.447 END TEST raid_rebuild_test 00:35:16.447 ************************************ 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:35:16.447 00:35:16.447 real 0m18.984s 00:35:16.447 user 0m20.899s 00:35:16.447 sys 0m3.999s 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.447 17:31:53 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:35:16.447 17:31:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:35:16.447 17:31:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:16.447 17:31:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:16.447 ************************************ 00:35:16.447 START TEST raid_rebuild_test_sb 00:35:16.447 ************************************ 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:35:16.447 17:31:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78462 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78462 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78462 ']' 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.447 17:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:16.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:16.448 17:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.448 17:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:16.448 17:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.448 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:16.448 Zero copy mechanism will not be used. 00:35:16.448 [2024-11-26 17:31:53.793319] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:35:16.448 [2024-11-26 17:31:53.793497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78462 ] 00:35:16.706 [2024-11-26 17:31:53.988828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.706 [2024-11-26 17:31:54.103585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.965 [2024-11-26 17:31:54.319103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:16.965 [2024-11-26 17:31:54.319138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.535 17:31:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.535 BaseBdev1_malloc 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.535 [2024-11-26 17:31:54.764993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:17.535 [2024-11-26 17:31:54.765086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.535 [2024-11-26 17:31:54.765114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:17.535 [2024-11-26 17:31:54.765129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.535 [2024-11-26 17:31:54.767482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.535 [2024-11-26 17:31:54.767527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:17.535 BaseBdev1 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.535 BaseBdev2_malloc 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.535 [2024-11-26 17:31:54.821727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:17.535 [2024-11-26 17:31:54.821796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.535 [2024-11-26 17:31:54.821822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:17.535 [2024-11-26 17:31:54.821836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.535 [2024-11-26 17:31:54.824238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.535 [2024-11-26 17:31:54.824281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:17.535 BaseBdev2 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.535 BaseBdev3_malloc 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:17.535 17:31:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.535 [2024-11-26 17:31:54.889284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:17.535 [2024-11-26 17:31:54.889361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.535 [2024-11-26 17:31:54.889387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:17.535 [2024-11-26 17:31:54.889402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.535 [2024-11-26 17:31:54.891747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.535 [2024-11-26 17:31:54.891791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:17.535 BaseBdev3 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.535 BaseBdev4_malloc 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.535 
[2024-11-26 17:31:54.938720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:17.535 [2024-11-26 17:31:54.938784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.535 [2024-11-26 17:31:54.938807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:17.535 [2024-11-26 17:31:54.938822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.535 [2024-11-26 17:31:54.941147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.535 [2024-11-26 17:31:54.941190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:17.535 BaseBdev4 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.535 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.795 spare_malloc 00:35:17.795 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.795 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:17.795 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.795 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.795 spare_delay 00:35:17.795 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.795 17:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:17.795 17:31:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.795 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.795 [2024-11-26 17:31:54.996192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:17.795 [2024-11-26 17:31:54.996251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.795 [2024-11-26 17:31:54.996270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:17.795 [2024-11-26 17:31:54.996284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.795 [2024-11-26 17:31:54.998603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.795 [2024-11-26 17:31:54.998646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:17.795 spare 00:35:17.795 17:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.795 [2024-11-26 17:31:55.004240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:17.795 [2024-11-26 17:31:55.006311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:17.795 [2024-11-26 17:31:55.006375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:17.795 [2024-11-26 17:31:55.006427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:17.795 [2024-11-26 17:31:55.006596] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:17.795 [2024-11-26 17:31:55.006611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:17.795 [2024-11-26 17:31:55.006868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:17.795 [2024-11-26 17:31:55.007059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:17.795 [2024-11-26 17:31:55.007071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:17.795 [2024-11-26 17:31:55.007207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:17.795 "name": "raid_bdev1", 00:35:17.795 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:17.795 "strip_size_kb": 0, 00:35:17.795 "state": "online", 00:35:17.795 "raid_level": "raid1", 00:35:17.795 "superblock": true, 00:35:17.795 "num_base_bdevs": 4, 00:35:17.795 "num_base_bdevs_discovered": 4, 00:35:17.795 "num_base_bdevs_operational": 4, 00:35:17.795 "base_bdevs_list": [ 00:35:17.795 { 00:35:17.795 "name": "BaseBdev1", 00:35:17.795 "uuid": "75b0bc31-dff4-5dd1-a969-4b25e4ceb9e6", 00:35:17.795 "is_configured": true, 00:35:17.795 "data_offset": 2048, 00:35:17.795 "data_size": 63488 00:35:17.795 }, 00:35:17.795 { 00:35:17.795 "name": "BaseBdev2", 00:35:17.795 "uuid": "bc1c4e5e-64ce-55aa-a40b-aa2fd8fda20b", 00:35:17.795 "is_configured": true, 00:35:17.795 "data_offset": 2048, 00:35:17.795 "data_size": 63488 00:35:17.795 }, 00:35:17.795 { 00:35:17.795 "name": "BaseBdev3", 00:35:17.795 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:17.795 "is_configured": true, 00:35:17.795 "data_offset": 2048, 00:35:17.795 "data_size": 63488 00:35:17.795 }, 00:35:17.795 { 00:35:17.795 "name": "BaseBdev4", 00:35:17.795 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:17.795 "is_configured": true, 00:35:17.795 "data_offset": 2048, 00:35:17.795 "data_size": 63488 00:35:17.795 } 00:35:17.795 ] 00:35:17.795 }' 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:17.795 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:18.054 [2024-11-26 17:31:55.412671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:35:18.054 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:18.313 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:18.313 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:18.572 [2024-11-26 17:31:55.776454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:35:18.572 /dev/nbd0 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:35:18.572 
17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:18.572 1+0 records in 00:35:18.572 1+0 records out 00:35:18.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246496 s, 16.6 MB/s 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:35:18.572 17:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:35:25.152 63488+0 records in 00:35:25.152 63488+0 records out 00:35:25.152 32505856 bytes (33 MB, 31 MiB) copied, 6.09256 s, 5.3 MB/s 00:35:25.152 17:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:35:25.152 17:32:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:25.152 17:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:25.152 17:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:25.152 17:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:35:25.152 17:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:25.152 17:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:25.152 [2024-11-26 17:32:02.117283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:25.152 [2024-11-26 17:32:02.149368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:25.152 
17:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.152 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:25.152 "name": "raid_bdev1", 00:35:25.152 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:25.153 "strip_size_kb": 0, 00:35:25.153 "state": 
"online", 00:35:25.153 "raid_level": "raid1", 00:35:25.153 "superblock": true, 00:35:25.153 "num_base_bdevs": 4, 00:35:25.153 "num_base_bdevs_discovered": 3, 00:35:25.153 "num_base_bdevs_operational": 3, 00:35:25.153 "base_bdevs_list": [ 00:35:25.153 { 00:35:25.153 "name": null, 00:35:25.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:25.153 "is_configured": false, 00:35:25.153 "data_offset": 0, 00:35:25.153 "data_size": 63488 00:35:25.153 }, 00:35:25.153 { 00:35:25.153 "name": "BaseBdev2", 00:35:25.153 "uuid": "bc1c4e5e-64ce-55aa-a40b-aa2fd8fda20b", 00:35:25.153 "is_configured": true, 00:35:25.153 "data_offset": 2048, 00:35:25.153 "data_size": 63488 00:35:25.153 }, 00:35:25.153 { 00:35:25.153 "name": "BaseBdev3", 00:35:25.153 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:25.153 "is_configured": true, 00:35:25.153 "data_offset": 2048, 00:35:25.153 "data_size": 63488 00:35:25.153 }, 00:35:25.153 { 00:35:25.153 "name": "BaseBdev4", 00:35:25.153 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:25.153 "is_configured": true, 00:35:25.153 "data_offset": 2048, 00:35:25.153 "data_size": 63488 00:35:25.153 } 00:35:25.153 ] 00:35:25.153 }' 00:35:25.153 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:25.153 17:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:25.153 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:25.153 17:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.153 17:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:25.153 [2024-11-26 17:32:02.581484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:25.153 [2024-11-26 17:32:02.595092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:35:25.153 17:32:02 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.153 17:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:35:25.153 [2024-11-26 17:32:02.597222] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:26.574 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:26.575 "name": "raid_bdev1", 00:35:26.575 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:26.575 "strip_size_kb": 0, 00:35:26.575 "state": "online", 00:35:26.575 "raid_level": "raid1", 00:35:26.575 "superblock": true, 00:35:26.575 "num_base_bdevs": 4, 00:35:26.575 "num_base_bdevs_discovered": 4, 00:35:26.575 "num_base_bdevs_operational": 4, 00:35:26.575 "process": { 00:35:26.575 "type": "rebuild", 00:35:26.575 "target": "spare", 00:35:26.575 "progress": { 00:35:26.575 "blocks": 20480, 
00:35:26.575 "percent": 32 00:35:26.575 } 00:35:26.575 }, 00:35:26.575 "base_bdevs_list": [ 00:35:26.575 { 00:35:26.575 "name": "spare", 00:35:26.575 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec", 00:35:26.575 "is_configured": true, 00:35:26.575 "data_offset": 2048, 00:35:26.575 "data_size": 63488 00:35:26.575 }, 00:35:26.575 { 00:35:26.575 "name": "BaseBdev2", 00:35:26.575 "uuid": "bc1c4e5e-64ce-55aa-a40b-aa2fd8fda20b", 00:35:26.575 "is_configured": true, 00:35:26.575 "data_offset": 2048, 00:35:26.575 "data_size": 63488 00:35:26.575 }, 00:35:26.575 { 00:35:26.575 "name": "BaseBdev3", 00:35:26.575 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:26.575 "is_configured": true, 00:35:26.575 "data_offset": 2048, 00:35:26.575 "data_size": 63488 00:35:26.575 }, 00:35:26.575 { 00:35:26.575 "name": "BaseBdev4", 00:35:26.575 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:26.575 "is_configured": true, 00:35:26.575 "data_offset": 2048, 00:35:26.575 "data_size": 63488 00:35:26.575 } 00:35:26.575 ] 00:35:26.575 }' 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:26.575 [2024-11-26 17:32:03.750551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:26.575 [2024-11-26 17:32:03.804923] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:26.575 [2024-11-26 17:32:03.805022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:26.575 [2024-11-26 17:32:03.805041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:26.575 [2024-11-26 17:32:03.805070] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:26.575 "name": "raid_bdev1", 00:35:26.575 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:26.575 "strip_size_kb": 0, 00:35:26.575 "state": "online", 00:35:26.575 "raid_level": "raid1", 00:35:26.575 "superblock": true, 00:35:26.575 "num_base_bdevs": 4, 00:35:26.575 "num_base_bdevs_discovered": 3, 00:35:26.575 "num_base_bdevs_operational": 3, 00:35:26.575 "base_bdevs_list": [ 00:35:26.575 { 00:35:26.575 "name": null, 00:35:26.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:26.575 "is_configured": false, 00:35:26.575 "data_offset": 0, 00:35:26.575 "data_size": 63488 00:35:26.575 }, 00:35:26.575 { 00:35:26.575 "name": "BaseBdev2", 00:35:26.575 "uuid": "bc1c4e5e-64ce-55aa-a40b-aa2fd8fda20b", 00:35:26.575 "is_configured": true, 00:35:26.575 "data_offset": 2048, 00:35:26.575 "data_size": 63488 00:35:26.575 }, 00:35:26.575 { 00:35:26.575 "name": "BaseBdev3", 00:35:26.575 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:26.575 "is_configured": true, 00:35:26.575 "data_offset": 2048, 00:35:26.575 "data_size": 63488 00:35:26.575 }, 00:35:26.575 { 00:35:26.575 "name": "BaseBdev4", 00:35:26.575 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:26.575 "is_configured": true, 00:35:26.575 "data_offset": 2048, 00:35:26.575 "data_size": 63488 00:35:26.575 } 00:35:26.575 ] 00:35:26.575 }' 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:26.575 17:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:26.851 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:35:26.851 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:26.851 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:26.851 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:26.851 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:26.851 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.851 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:26.851 17:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.851 17:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:27.110 "name": "raid_bdev1", 00:35:27.110 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:27.110 "strip_size_kb": 0, 00:35:27.110 "state": "online", 00:35:27.110 "raid_level": "raid1", 00:35:27.110 "superblock": true, 00:35:27.110 "num_base_bdevs": 4, 00:35:27.110 "num_base_bdevs_discovered": 3, 00:35:27.110 "num_base_bdevs_operational": 3, 00:35:27.110 "base_bdevs_list": [ 00:35:27.110 { 00:35:27.110 "name": null, 00:35:27.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:27.110 "is_configured": false, 00:35:27.110 "data_offset": 0, 00:35:27.110 "data_size": 63488 00:35:27.110 }, 00:35:27.110 { 00:35:27.110 "name": "BaseBdev2", 00:35:27.110 "uuid": "bc1c4e5e-64ce-55aa-a40b-aa2fd8fda20b", 00:35:27.110 "is_configured": true, 00:35:27.110 "data_offset": 2048, 00:35:27.110 "data_size": 63488 00:35:27.110 }, 00:35:27.110 { 00:35:27.110 "name": "BaseBdev3", 00:35:27.110 "uuid": 
"84742018-8e45-56c0-a321-c084b6fa428f", 00:35:27.110 "is_configured": true, 00:35:27.110 "data_offset": 2048, 00:35:27.110 "data_size": 63488 00:35:27.110 }, 00:35:27.110 { 00:35:27.110 "name": "BaseBdev4", 00:35:27.110 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:27.110 "is_configured": true, 00:35:27.110 "data_offset": 2048, 00:35:27.110 "data_size": 63488 00:35:27.110 } 00:35:27.110 ] 00:35:27.110 }' 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:27.110 [2024-11-26 17:32:04.419711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:27.110 [2024-11-26 17:32:04.436174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.110 17:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:35:27.110 [2024-11-26 17:32:04.438509] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:28.046 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.305 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:28.305 "name": "raid_bdev1", 00:35:28.305 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:28.305 "strip_size_kb": 0, 00:35:28.305 "state": "online", 00:35:28.305 "raid_level": "raid1", 00:35:28.305 "superblock": true, 00:35:28.305 "num_base_bdevs": 4, 00:35:28.305 "num_base_bdevs_discovered": 4, 00:35:28.305 "num_base_bdevs_operational": 4, 00:35:28.305 "process": { 00:35:28.305 "type": "rebuild", 00:35:28.305 "target": "spare", 00:35:28.305 "progress": { 00:35:28.305 "blocks": 20480, 00:35:28.305 "percent": 32 00:35:28.305 } 00:35:28.305 }, 00:35:28.305 "base_bdevs_list": [ 00:35:28.305 { 00:35:28.305 "name": "spare", 00:35:28.305 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec", 00:35:28.305 "is_configured": true, 00:35:28.305 "data_offset": 2048, 00:35:28.305 "data_size": 63488 00:35:28.305 }, 00:35:28.305 { 00:35:28.305 "name": "BaseBdev2", 00:35:28.305 "uuid": "bc1c4e5e-64ce-55aa-a40b-aa2fd8fda20b", 00:35:28.305 "is_configured": true, 00:35:28.305 "data_offset": 2048, 
00:35:28.305 "data_size": 63488 00:35:28.305 }, 00:35:28.305 { 00:35:28.305 "name": "BaseBdev3", 00:35:28.305 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:28.305 "is_configured": true, 00:35:28.305 "data_offset": 2048, 00:35:28.305 "data_size": 63488 00:35:28.305 }, 00:35:28.305 { 00:35:28.305 "name": "BaseBdev4", 00:35:28.305 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:28.305 "is_configured": true, 00:35:28.305 "data_offset": 2048, 00:35:28.306 "data_size": 63488 00:35:28.306 } 00:35:28.306 ] 00:35:28.306 }' 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:35:28.306 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:28.306 [2024-11-26 17:32:05.595691] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:28.306 [2024-11-26 17:32:05.746466] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:28.306 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:28.566 "name": "raid_bdev1", 00:35:28.566 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:28.566 "strip_size_kb": 0, 00:35:28.566 "state": "online", 00:35:28.566 "raid_level": "raid1", 00:35:28.566 "superblock": true, 00:35:28.566 "num_base_bdevs": 4, 
00:35:28.566 "num_base_bdevs_discovered": 3, 00:35:28.566 "num_base_bdevs_operational": 3, 00:35:28.566 "process": { 00:35:28.566 "type": "rebuild", 00:35:28.566 "target": "spare", 00:35:28.566 "progress": { 00:35:28.566 "blocks": 24576, 00:35:28.566 "percent": 38 00:35:28.566 } 00:35:28.566 }, 00:35:28.566 "base_bdevs_list": [ 00:35:28.566 { 00:35:28.566 "name": "spare", 00:35:28.566 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec", 00:35:28.566 "is_configured": true, 00:35:28.566 "data_offset": 2048, 00:35:28.566 "data_size": 63488 00:35:28.566 }, 00:35:28.566 { 00:35:28.566 "name": null, 00:35:28.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:28.566 "is_configured": false, 00:35:28.566 "data_offset": 0, 00:35:28.566 "data_size": 63488 00:35:28.566 }, 00:35:28.566 { 00:35:28.566 "name": "BaseBdev3", 00:35:28.566 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:28.566 "is_configured": true, 00:35:28.566 "data_offset": 2048, 00:35:28.566 "data_size": 63488 00:35:28.566 }, 00:35:28.566 { 00:35:28.566 "name": "BaseBdev4", 00:35:28.566 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:28.566 "is_configured": true, 00:35:28.566 "data_offset": 2048, 00:35:28.566 "data_size": 63488 00:35:28.566 } 00:35:28.566 ] 00:35:28.566 }' 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=479 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:28.566 "name": "raid_bdev1", 00:35:28.566 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:28.566 "strip_size_kb": 0, 00:35:28.566 "state": "online", 00:35:28.566 "raid_level": "raid1", 00:35:28.566 "superblock": true, 00:35:28.566 "num_base_bdevs": 4, 00:35:28.566 "num_base_bdevs_discovered": 3, 00:35:28.566 "num_base_bdevs_operational": 3, 00:35:28.566 "process": { 00:35:28.566 "type": "rebuild", 00:35:28.566 "target": "spare", 00:35:28.566 "progress": { 00:35:28.566 "blocks": 26624, 00:35:28.566 "percent": 41 00:35:28.566 } 00:35:28.566 }, 00:35:28.566 "base_bdevs_list": [ 00:35:28.566 { 00:35:28.566 "name": "spare", 00:35:28.566 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec", 00:35:28.566 "is_configured": true, 00:35:28.566 "data_offset": 2048, 00:35:28.566 "data_size": 63488 00:35:28.566 }, 00:35:28.566 { 
00:35:28.566 "name": null, 00:35:28.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:28.566 "is_configured": false, 00:35:28.566 "data_offset": 0, 00:35:28.566 "data_size": 63488 00:35:28.566 }, 00:35:28.566 { 00:35:28.566 "name": "BaseBdev3", 00:35:28.566 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:28.566 "is_configured": true, 00:35:28.566 "data_offset": 2048, 00:35:28.566 "data_size": 63488 00:35:28.566 }, 00:35:28.566 { 00:35:28.566 "name": "BaseBdev4", 00:35:28.566 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:28.566 "is_configured": true, 00:35:28.566 "data_offset": 2048, 00:35:28.566 "data_size": 63488 00:35:28.566 } 00:35:28.566 ] 00:35:28.566 }' 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:28.566 17:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:28.566 17:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:28.566 17:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.941 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:29.941 "name": "raid_bdev1", 00:35:29.941 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:29.941 "strip_size_kb": 0, 00:35:29.941 "state": "online", 00:35:29.941 "raid_level": "raid1", 00:35:29.941 "superblock": true, 00:35:29.941 "num_base_bdevs": 4, 00:35:29.941 "num_base_bdevs_discovered": 3, 00:35:29.941 "num_base_bdevs_operational": 3, 00:35:29.941 "process": { 00:35:29.941 "type": "rebuild", 00:35:29.941 "target": "spare", 00:35:29.941 "progress": { 00:35:29.941 "blocks": 49152, 00:35:29.941 "percent": 77 00:35:29.941 } 00:35:29.941 }, 00:35:29.941 "base_bdevs_list": [ 00:35:29.941 { 00:35:29.941 "name": "spare", 00:35:29.941 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec", 00:35:29.941 "is_configured": true, 00:35:29.941 "data_offset": 2048, 00:35:29.941 "data_size": 63488 00:35:29.941 }, 00:35:29.941 { 00:35:29.941 "name": null, 00:35:29.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:29.941 "is_configured": false, 00:35:29.941 "data_offset": 0, 00:35:29.942 "data_size": 63488 00:35:29.942 }, 00:35:29.942 { 00:35:29.942 "name": "BaseBdev3", 00:35:29.942 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:29.942 "is_configured": true, 00:35:29.942 "data_offset": 2048, 00:35:29.942 "data_size": 63488 00:35:29.942 }, 00:35:29.942 { 00:35:29.942 "name": "BaseBdev4", 00:35:29.942 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:29.942 "is_configured": true, 00:35:29.942 "data_offset": 
2048, 00:35:29.942 "data_size": 63488 00:35:29.942 } 00:35:29.942 ] 00:35:29.942 }' 00:35:29.942 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:29.942 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:29.942 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:29.942 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:29.942 17:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:30.509 [2024-11-26 17:32:07.658804] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:30.509 [2024-11-26 17:32:07.658900] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:30.509 [2024-11-26 17:32:07.659069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.767 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:30.767 "name": "raid_bdev1", 00:35:30.767 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:30.767 "strip_size_kb": 0, 00:35:30.767 "state": "online", 00:35:30.767 "raid_level": "raid1", 00:35:30.767 "superblock": true, 00:35:30.767 "num_base_bdevs": 4, 00:35:30.767 "num_base_bdevs_discovered": 3, 00:35:30.767 "num_base_bdevs_operational": 3, 00:35:30.767 "base_bdevs_list": [ 00:35:30.767 { 00:35:30.767 "name": "spare", 00:35:30.767 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec", 00:35:30.767 "is_configured": true, 00:35:30.767 "data_offset": 2048, 00:35:30.767 "data_size": 63488 00:35:30.767 }, 00:35:30.767 { 00:35:30.767 "name": null, 00:35:30.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.767 "is_configured": false, 00:35:30.767 "data_offset": 0, 00:35:30.767 "data_size": 63488 00:35:30.767 }, 00:35:30.767 { 00:35:30.767 "name": "BaseBdev3", 00:35:30.767 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:30.767 "is_configured": true, 00:35:30.767 "data_offset": 2048, 00:35:30.767 "data_size": 63488 00:35:30.767 }, 00:35:30.767 { 00:35:30.767 "name": "BaseBdev4", 00:35:30.767 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:30.767 "is_configured": true, 00:35:30.767 "data_offset": 2048, 00:35:30.767 "data_size": 63488 00:35:30.767 } 00:35:30.767 ] 00:35:30.767 }' 00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:31.026 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:31.027 "name": "raid_bdev1", 00:35:31.027 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:31.027 "strip_size_kb": 0, 00:35:31.027 "state": "online", 00:35:31.027 "raid_level": "raid1", 00:35:31.027 "superblock": true, 00:35:31.027 "num_base_bdevs": 4, 00:35:31.027 "num_base_bdevs_discovered": 3, 00:35:31.027 "num_base_bdevs_operational": 3, 00:35:31.027 "base_bdevs_list": [ 00:35:31.027 { 00:35:31.027 "name": "spare", 00:35:31.027 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec", 00:35:31.027 "is_configured": true, 00:35:31.027 "data_offset": 2048, 00:35:31.027 "data_size": 63488 
00:35:31.027 }, 00:35:31.027 { 00:35:31.027 "name": null, 00:35:31.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:31.027 "is_configured": false, 00:35:31.027 "data_offset": 0, 00:35:31.027 "data_size": 63488 00:35:31.027 }, 00:35:31.027 { 00:35:31.027 "name": "BaseBdev3", 00:35:31.027 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:31.027 "is_configured": true, 00:35:31.027 "data_offset": 2048, 00:35:31.027 "data_size": 63488 00:35:31.027 }, 00:35:31.027 { 00:35:31.027 "name": "BaseBdev4", 00:35:31.027 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:31.027 "is_configured": true, 00:35:31.027 "data_offset": 2048, 00:35:31.027 "data_size": 63488 00:35:31.027 } 00:35:31.027 ] 00:35:31.027 }' 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:31.027 17:32:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:31.027 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.286 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:31.286 "name": "raid_bdev1", 00:35:31.286 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:31.286 "strip_size_kb": 0, 00:35:31.286 "state": "online", 00:35:31.286 "raid_level": "raid1", 00:35:31.286 "superblock": true, 00:35:31.286 "num_base_bdevs": 4, 00:35:31.286 "num_base_bdevs_discovered": 3, 00:35:31.286 "num_base_bdevs_operational": 3, 00:35:31.286 "base_bdevs_list": [ 00:35:31.286 { 00:35:31.286 "name": "spare", 00:35:31.286 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec", 00:35:31.286 "is_configured": true, 00:35:31.286 "data_offset": 2048, 00:35:31.286 "data_size": 63488 00:35:31.286 }, 00:35:31.286 { 00:35:31.286 "name": null, 00:35:31.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:31.286 "is_configured": false, 00:35:31.286 "data_offset": 0, 00:35:31.286 "data_size": 63488 00:35:31.286 }, 00:35:31.286 { 00:35:31.286 "name": "BaseBdev3", 00:35:31.286 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:31.286 "is_configured": true, 00:35:31.286 "data_offset": 2048, 00:35:31.286 "data_size": 63488 00:35:31.286 }, 
00:35:31.286 {
00:35:31.286 "name": "BaseBdev4",
00:35:31.286 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6",
00:35:31.286 "is_configured": true,
00:35:31.286 "data_offset": 2048,
00:35:31.286 "data_size": 63488
00:35:31.286 }
00:35:31.286 ]
00:35:31.286 }'
00:35:31.286 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:35:31.286 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:31.545 [2024-11-26 17:32:08.881307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:35:31.545 [2024-11-26 17:32:08.881344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:35:31.545 [2024-11-26 17:32:08.881452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:35:31.545 [2024-11-26 17:32:08.881536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:35:31.545 [2024-11-26 17:32:08.881548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:35:31.545 17:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:35:31.805 /dev/nbd0
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:35:31.805 1+0 records in
00:35:31.805 1+0 records out
00:35:31.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365868 s, 11.2 MB/s
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:35:31.805 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:35:32.063 /dev/nbd1
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:35:32.063 1+0 records in
00:35:32.063 1+0 records out
00:35:32.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362234 s, 11.3 MB/s
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:35:32.063 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:35:32.322 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:35:32.322 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:35:32.322 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:35:32.322 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:35:32.322 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:35:32.322 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:35:32.322 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:35:32.581 17:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:33.149 [2024-11-26 17:32:10.309622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:35:33.149 [2024-11-26 17:32:10.309687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:35:33.149 [2024-11-26 17:32:10.309713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:35:33.149 [2024-11-26 17:32:10.309725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:35:33.149 [2024-11-26 17:32:10.312241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:35:33.149 [2024-11-26 17:32:10.312280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:35:33.149 [2024-11-26 17:32:10.312378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:35:33.149 [2024-11-26 17:32:10.312429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:35:33.149 [2024-11-26 17:32:10.312566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:35:33.149 [2024-11-26 17:32:10.312655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:35:33.149 spare
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:33.149 [2024-11-26 17:32:10.412758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:35:33.149 [2024-11-26 17:32:10.412810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:35:33.149 [2024-11-26 17:32:10.413203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80
00:35:33.149 [2024-11-26 17:32:10.413410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:35:33.149 [2024-11-26 17:32:10.413426] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:35:33.149 [2024-11-26 17:32:10.413671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:33.149 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:33.150 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.150 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:35:33.150 "name": "raid_bdev1",
00:35:33.150 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9",
00:35:33.150 "strip_size_kb": 0,
00:35:33.150 "state": "online",
00:35:33.150 "raid_level": "raid1",
00:35:33.150 "superblock": true,
00:35:33.150 "num_base_bdevs": 4,
00:35:33.150 "num_base_bdevs_discovered": 3,
00:35:33.150 "num_base_bdevs_operational": 3,
00:35:33.150 "base_bdevs_list": [
00:35:33.150 {
00:35:33.150 "name": "spare",
00:35:33.150 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec",
00:35:33.150 "is_configured": true,
00:35:33.150 "data_offset": 2048,
00:35:33.150 "data_size": 63488
00:35:33.150 },
00:35:33.150 {
00:35:33.150 "name": null,
00:35:33.150 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:33.150 "is_configured": false,
00:35:33.150 "data_offset": 2048,
00:35:33.150 "data_size": 63488
00:35:33.150 },
00:35:33.150 {
00:35:33.150 "name": "BaseBdev3",
00:35:33.150 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f",
00:35:33.150 "is_configured": true,
00:35:33.150 "data_offset": 2048,
00:35:33.150 "data_size": 63488
00:35:33.150 },
00:35:33.150 {
00:35:33.150 "name": "BaseBdev4",
00:35:33.150 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6",
00:35:33.150 "is_configured": true,
00:35:33.150 "data_offset": 2048,
00:35:33.150 "data_size": 63488
00:35:33.150 }
00:35:33.150 ]
00:35:33.150 }'
00:35:33.150 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:35:33.150 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:35:33.718 "name": "raid_bdev1",
00:35:33.718 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9",
00:35:33.718 "strip_size_kb": 0,
00:35:33.718 "state": "online",
00:35:33.718 "raid_level": "raid1",
00:35:33.718 "superblock": true,
00:35:33.718 "num_base_bdevs": 4,
00:35:33.718 "num_base_bdevs_discovered": 3,
00:35:33.718 "num_base_bdevs_operational": 3,
00:35:33.718 "base_bdevs_list": [
00:35:33.718 {
00:35:33.718 "name": "spare",
00:35:33.718 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec",
00:35:33.718 "is_configured": true,
00:35:33.718 "data_offset": 2048,
00:35:33.718 "data_size": 63488
00:35:33.718 },
00:35:33.718 {
00:35:33.718 "name": null,
00:35:33.718 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:33.718 "is_configured": false,
00:35:33.718 "data_offset": 2048,
00:35:33.718 "data_size": 63488
00:35:33.718 },
00:35:33.718 {
00:35:33.718 "name": "BaseBdev3",
00:35:33.718 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f",
00:35:33.718 "is_configured": true,
00:35:33.718 "data_offset": 2048,
00:35:33.718 "data_size": 63488
00:35:33.718 },
00:35:33.718 {
00:35:33.718 "name": "BaseBdev4",
00:35:33.718 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6",
00:35:33.718 "is_configured": true,
00:35:33.718 "data_offset": 2048,
00:35:33.718 "data_size": 63488
00:35:33.718 }
00:35:33.718 ]
00:35:33.718 }'
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:35:33.718 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:35:33.719 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:35:33.719 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:35:33.719 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:33.719 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.719 17:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:33.719 17:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:33.719 [2024-11-26 17:32:11.037856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:35:33.719 "name": "raid_bdev1",
00:35:33.719 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9",
00:35:33.719 "strip_size_kb": 0,
00:35:33.719 "state": "online",
00:35:33.719 "raid_level": "raid1",
00:35:33.719 "superblock": true,
00:35:33.719 "num_base_bdevs": 4,
00:35:33.719 "num_base_bdevs_discovered": 2,
00:35:33.719 "num_base_bdevs_operational": 2,
00:35:33.719 "base_bdevs_list": [
00:35:33.719 {
00:35:33.719 "name": null,
00:35:33.719 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:33.719 "is_configured": false,
00:35:33.719 "data_offset": 0,
00:35:33.719 "data_size": 63488
00:35:33.719 },
00:35:33.719 {
00:35:33.719 "name": null,
00:35:33.719 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:33.719 "is_configured": false,
00:35:33.719 "data_offset": 2048,
00:35:33.719 "data_size": 63488
00:35:33.719 },
00:35:33.719 {
00:35:33.719 "name": "BaseBdev3",
00:35:33.719 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f",
00:35:33.719 "is_configured": true,
00:35:33.719 "data_offset": 2048,
00:35:33.719 "data_size": 63488
00:35:33.719 },
00:35:33.719 {
00:35:33.719 "name": "BaseBdev4",
00:35:33.719 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6",
00:35:33.719 "is_configured": true,
00:35:33.719 "data_offset": 2048,
00:35:33.719 "data_size": 63488
00:35:33.719 }
00:35:33.719 ]
00:35:33.719 }'
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:35:33.719 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:34.288 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:35:34.288 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.288 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:34.288 [2024-11-26 17:32:11.505948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:35:34.288 [2024-11-26 17:32:11.506356] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:35:34.288 [2024-11-26 17:32:11.506516] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:35:34.288 [2024-11-26 17:32:11.506657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:35:34.288 [2024-11-26 17:32:11.521979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50
00:35:34.288 17:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.288 17:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:35:34.288 [2024-11-26 17:32:11.524136] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:35:35.226 "name": "raid_bdev1",
00:35:35.226 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9",
00:35:35.226 "strip_size_kb": 0,
00:35:35.226 "state": "online",
00:35:35.226 "raid_level": "raid1",
00:35:35.226 "superblock": true,
00:35:35.226 "num_base_bdevs": 4,
00:35:35.226 "num_base_bdevs_discovered": 3,
00:35:35.226 "num_base_bdevs_operational": 3,
00:35:35.226 "process": {
00:35:35.226 "type": "rebuild",
00:35:35.226 "target": "spare",
00:35:35.226 "progress": {
00:35:35.226 "blocks": 20480,
00:35:35.226 "percent": 32
00:35:35.226 }
00:35:35.226 },
00:35:35.226 "base_bdevs_list": [
00:35:35.226 {
00:35:35.226 "name": "spare",
00:35:35.226 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec",
00:35:35.226 "is_configured": true,
00:35:35.226 "data_offset": 2048,
00:35:35.226 "data_size": 63488
00:35:35.226 },
00:35:35.226 {
00:35:35.226 "name": null,
00:35:35.226 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:35.226 "is_configured": false,
00:35:35.226 "data_offset": 2048,
00:35:35.226 "data_size": 63488
00:35:35.226 },
00:35:35.226 {
00:35:35.226 "name": "BaseBdev3",
00:35:35.226 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f",
00:35:35.226 "is_configured": true,
00:35:35.226 "data_offset": 2048,
00:35:35.226 "data_size": 63488
00:35:35.226 },
00:35:35.226 {
00:35:35.226 "name": "BaseBdev4",
00:35:35.226 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6",
00:35:35.226 "is_configured": true,
00:35:35.226 "data_offset": 2048,
00:35:35.226 "data_size": 63488
00:35:35.226 }
00:35:35.226 ]
00:35:35.226 }'
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.226 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:35.486 [2024-11-26 17:32:12.677552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:35:35.486 [2024-11-26 17:32:12.731732] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:35:35.486 [2024-11-26 17:32:12.731797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:35:35.486 [2024-11-26 17:32:12.731819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:35:35.486 [2024-11-26 17:32:12.731828] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:35:35.486 "name": "raid_bdev1",
00:35:35.486 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9",
00:35:35.486 "strip_size_kb": 0,
00:35:35.486 "state": "online",
00:35:35.486 "raid_level": "raid1",
00:35:35.486 "superblock": true,
00:35:35.486 "num_base_bdevs": 4,
00:35:35.486 "num_base_bdevs_discovered": 2,
00:35:35.486 "num_base_bdevs_operational": 2,
00:35:35.486 "base_bdevs_list": [
00:35:35.486 {
00:35:35.486 "name": null,
00:35:35.486 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:35.486 "is_configured": false,
00:35:35.486 "data_offset": 0,
00:35:35.486 "data_size": 63488
00:35:35.486 },
00:35:35.486 {
00:35:35.486 "name": null,
00:35:35.486 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:35.486 "is_configured": false,
00:35:35.486 "data_offset": 2048,
00:35:35.486 "data_size": 63488
00:35:35.486 },
00:35:35.486 {
00:35:35.486 "name": "BaseBdev3",
00:35:35.486 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f",
00:35:35.486 "is_configured": true,
00:35:35.486 "data_offset": 2048,
00:35:35.486 "data_size": 63488
00:35:35.486 },
00:35:35.486 {
00:35:35.486 "name": "BaseBdev4",
00:35:35.486 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6",
00:35:35.486 "is_configured": true,
00:35:35.486 "data_offset": 2048,
00:35:35.486 "data_size": 63488
00:35:35.486 }
00:35:35.486 ]
00:35:35.486 }'
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:35:35.486 17:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:36.053 17:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:35:36.054 17:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:36.054 17:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:36.054 [2024-11-26 17:32:13.210181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:35:36.054 [2024-11-26 17:32:13.210260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:35:36.054 [2024-11-26 17:32:13.210298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:35:36.054 [2024-11-26 17:32:13.210310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:35:36.054 [2024-11-26 17:32:13.210792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:35:36.054 [2024-11-26 17:32:13.210811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:35:36.054 [2024-11-26 17:32:13.210908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:35:36.054 [2024-11-26 17:32:13.210921] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:35:36.054 [2024-11-26 17:32:13.210947] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:35:36.054 [2024-11-26 17:32:13.210973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:36.054 [2024-11-26 17:32:13.226149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:35:36.054 spare 00:35:36.054 17:32:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.054 17:32:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:35:36.054 [2024-11-26 17:32:13.228269] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:36.991 "name": "raid_bdev1", 00:35:36.991 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:36.991 "strip_size_kb": 0, 00:35:36.991 "state": "online", 00:35:36.991 
"raid_level": "raid1", 00:35:36.991 "superblock": true, 00:35:36.991 "num_base_bdevs": 4, 00:35:36.991 "num_base_bdevs_discovered": 3, 00:35:36.991 "num_base_bdevs_operational": 3, 00:35:36.991 "process": { 00:35:36.991 "type": "rebuild", 00:35:36.991 "target": "spare", 00:35:36.991 "progress": { 00:35:36.991 "blocks": 20480, 00:35:36.991 "percent": 32 00:35:36.991 } 00:35:36.991 }, 00:35:36.991 "base_bdevs_list": [ 00:35:36.991 { 00:35:36.991 "name": "spare", 00:35:36.991 "uuid": "a3301495-37eb-5bb0-b362-a255444b80ec", 00:35:36.991 "is_configured": true, 00:35:36.991 "data_offset": 2048, 00:35:36.991 "data_size": 63488 00:35:36.991 }, 00:35:36.991 { 00:35:36.991 "name": null, 00:35:36.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:36.991 "is_configured": false, 00:35:36.991 "data_offset": 2048, 00:35:36.991 "data_size": 63488 00:35:36.991 }, 00:35:36.991 { 00:35:36.991 "name": "BaseBdev3", 00:35:36.991 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:36.991 "is_configured": true, 00:35:36.991 "data_offset": 2048, 00:35:36.991 "data_size": 63488 00:35:36.991 }, 00:35:36.991 { 00:35:36.991 "name": "BaseBdev4", 00:35:36.991 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:36.991 "is_configured": true, 00:35:36.991 "data_offset": 2048, 00:35:36.991 "data_size": 63488 00:35:36.991 } 00:35:36.991 ] 00:35:36.991 }' 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.991 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.991 [2024-11-26 17:32:14.382139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:36.991 [2024-11-26 17:32:14.435813] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:36.991 [2024-11-26 17:32:14.436035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:36.992 [2024-11-26 17:32:14.436139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:36.992 [2024-11-26 17:32:14.436185] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:37.251 
17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:37.251 "name": "raid_bdev1", 00:35:37.251 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:37.251 "strip_size_kb": 0, 00:35:37.251 "state": "online", 00:35:37.251 "raid_level": "raid1", 00:35:37.251 "superblock": true, 00:35:37.251 "num_base_bdevs": 4, 00:35:37.251 "num_base_bdevs_discovered": 2, 00:35:37.251 "num_base_bdevs_operational": 2, 00:35:37.251 "base_bdevs_list": [ 00:35:37.251 { 00:35:37.251 "name": null, 00:35:37.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.251 "is_configured": false, 00:35:37.251 "data_offset": 0, 00:35:37.251 "data_size": 63488 00:35:37.251 }, 00:35:37.251 { 00:35:37.251 "name": null, 00:35:37.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.251 "is_configured": false, 00:35:37.251 "data_offset": 2048, 00:35:37.251 "data_size": 63488 00:35:37.251 }, 00:35:37.251 { 00:35:37.251 "name": "BaseBdev3", 00:35:37.251 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:37.251 "is_configured": true, 00:35:37.251 "data_offset": 2048, 00:35:37.251 "data_size": 63488 00:35:37.251 }, 00:35:37.251 { 00:35:37.251 "name": "BaseBdev4", 00:35:37.251 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:37.251 "is_configured": true, 00:35:37.251 "data_offset": 2048, 00:35:37.251 "data_size": 63488 00:35:37.251 } 00:35:37.251 ] 00:35:37.251 }' 00:35:37.251 17:32:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:37.251 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.511 17:32:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.770 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:37.770 "name": "raid_bdev1", 00:35:37.770 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:37.770 "strip_size_kb": 0, 00:35:37.770 "state": "online", 00:35:37.770 "raid_level": "raid1", 00:35:37.770 "superblock": true, 00:35:37.770 "num_base_bdevs": 4, 00:35:37.770 "num_base_bdevs_discovered": 2, 00:35:37.770 "num_base_bdevs_operational": 2, 00:35:37.770 "base_bdevs_list": [ 00:35:37.770 { 00:35:37.770 "name": null, 00:35:37.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.770 "is_configured": false, 00:35:37.770 "data_offset": 0, 00:35:37.770 "data_size": 63488 00:35:37.770 }, 00:35:37.770 
{ 00:35:37.770 "name": null, 00:35:37.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.770 "is_configured": false, 00:35:37.770 "data_offset": 2048, 00:35:37.770 "data_size": 63488 00:35:37.770 }, 00:35:37.770 { 00:35:37.770 "name": "BaseBdev3", 00:35:37.770 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:37.770 "is_configured": true, 00:35:37.770 "data_offset": 2048, 00:35:37.770 "data_size": 63488 00:35:37.770 }, 00:35:37.770 { 00:35:37.770 "name": "BaseBdev4", 00:35:37.770 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:37.770 "is_configured": true, 00:35:37.770 "data_offset": 2048, 00:35:37.770 "data_size": 63488 00:35:37.770 } 00:35:37.770 ] 00:35:37.770 }' 00:35:37.770 17:32:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:37.770 [2024-11-26 17:32:15.075065] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:37.770 [2024-11-26 17:32:15.075135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:37.770 [2024-11-26 17:32:15.075161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:35:37.770 [2024-11-26 17:32:15.075176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:37.770 [2024-11-26 17:32:15.075638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:37.770 [2024-11-26 17:32:15.075661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:37.770 [2024-11-26 17:32:15.075741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:37.770 [2024-11-26 17:32:15.075758] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:35:37.770 [2024-11-26 17:32:15.075768] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:37.770 [2024-11-26 17:32:15.075795] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:35:37.770 BaseBdev1 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.770 17:32:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:38.707 17:32:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:38.707 "name": "raid_bdev1", 00:35:38.707 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:38.707 "strip_size_kb": 0, 00:35:38.707 "state": "online", 00:35:38.707 "raid_level": "raid1", 00:35:38.707 "superblock": true, 00:35:38.707 "num_base_bdevs": 4, 00:35:38.707 "num_base_bdevs_discovered": 2, 00:35:38.707 "num_base_bdevs_operational": 2, 00:35:38.707 "base_bdevs_list": [ 00:35:38.707 { 00:35:38.707 "name": null, 00:35:38.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.707 "is_configured": false, 00:35:38.707 "data_offset": 0, 00:35:38.707 "data_size": 63488 00:35:38.707 }, 00:35:38.707 { 00:35:38.707 "name": null, 00:35:38.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.707 
"is_configured": false, 00:35:38.707 "data_offset": 2048, 00:35:38.707 "data_size": 63488 00:35:38.707 }, 00:35:38.707 { 00:35:38.707 "name": "BaseBdev3", 00:35:38.707 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:38.707 "is_configured": true, 00:35:38.707 "data_offset": 2048, 00:35:38.707 "data_size": 63488 00:35:38.707 }, 00:35:38.707 { 00:35:38.707 "name": "BaseBdev4", 00:35:38.707 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:38.707 "is_configured": true, 00:35:38.707 "data_offset": 2048, 00:35:38.707 "data_size": 63488 00:35:38.707 } 00:35:38.707 ] 00:35:38.707 }' 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:38.707 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:35:39.276 "name": "raid_bdev1", 00:35:39.276 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:39.276 "strip_size_kb": 0, 00:35:39.276 "state": "online", 00:35:39.276 "raid_level": "raid1", 00:35:39.276 "superblock": true, 00:35:39.276 "num_base_bdevs": 4, 00:35:39.276 "num_base_bdevs_discovered": 2, 00:35:39.276 "num_base_bdevs_operational": 2, 00:35:39.276 "base_bdevs_list": [ 00:35:39.276 { 00:35:39.276 "name": null, 00:35:39.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.276 "is_configured": false, 00:35:39.276 "data_offset": 0, 00:35:39.276 "data_size": 63488 00:35:39.276 }, 00:35:39.276 { 00:35:39.276 "name": null, 00:35:39.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.276 "is_configured": false, 00:35:39.276 "data_offset": 2048, 00:35:39.276 "data_size": 63488 00:35:39.276 }, 00:35:39.276 { 00:35:39.276 "name": "BaseBdev3", 00:35:39.276 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:39.276 "is_configured": true, 00:35:39.276 "data_offset": 2048, 00:35:39.276 "data_size": 63488 00:35:39.276 }, 00:35:39.276 { 00:35:39.276 "name": "BaseBdev4", 00:35:39.276 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:39.276 "is_configured": true, 00:35:39.276 "data_offset": 2048, 00:35:39.276 "data_size": 63488 00:35:39.276 } 00:35:39.276 ] 00:35:39.276 }' 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:39.276 [2024-11-26 17:32:16.679526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:39.276 [2024-11-26 17:32:16.679728] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:35:39.276 [2024-11-26 17:32:16.679747] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:39.276 request: 00:35:39.276 { 00:35:39.276 "base_bdev": "BaseBdev1", 00:35:39.276 "raid_bdev": "raid_bdev1", 00:35:39.276 "method": "bdev_raid_add_base_bdev", 00:35:39.276 "req_id": 1 00:35:39.276 } 00:35:39.276 Got JSON-RPC error response 00:35:39.276 response: 00:35:39.276 { 00:35:39.276 "code": -22, 00:35:39.276 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:35:39.276 } 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:39.276 17:32:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:40.655 "name": "raid_bdev1", 00:35:40.655 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:40.655 "strip_size_kb": 0, 00:35:40.655 "state": "online", 00:35:40.655 "raid_level": "raid1", 00:35:40.655 "superblock": true, 00:35:40.655 "num_base_bdevs": 4, 00:35:40.655 "num_base_bdevs_discovered": 2, 00:35:40.655 "num_base_bdevs_operational": 2, 00:35:40.655 "base_bdevs_list": [ 00:35:40.655 { 00:35:40.655 "name": null, 00:35:40.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:40.655 "is_configured": false, 00:35:40.655 "data_offset": 0, 00:35:40.655 "data_size": 63488 00:35:40.655 }, 00:35:40.655 { 00:35:40.655 "name": null, 00:35:40.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:40.655 "is_configured": false, 00:35:40.655 "data_offset": 2048, 00:35:40.655 "data_size": 63488 00:35:40.655 }, 00:35:40.655 { 00:35:40.655 "name": "BaseBdev3", 00:35:40.655 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:40.655 "is_configured": true, 00:35:40.655 "data_offset": 2048, 00:35:40.655 "data_size": 63488 00:35:40.655 }, 00:35:40.655 { 00:35:40.655 "name": "BaseBdev4", 00:35:40.655 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:40.655 "is_configured": true, 00:35:40.655 "data_offset": 2048, 00:35:40.655 "data_size": 63488 00:35:40.655 } 00:35:40.655 ] 00:35:40.655 }' 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:40.655 17:32:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:40.915 17:32:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:40.915 "name": "raid_bdev1", 00:35:40.915 "uuid": "b006fce5-bd35-4a71-b15e-ad4d3c4b22d9", 00:35:40.915 "strip_size_kb": 0, 00:35:40.915 "state": "online", 00:35:40.915 "raid_level": "raid1", 00:35:40.915 "superblock": true, 00:35:40.915 "num_base_bdevs": 4, 00:35:40.915 "num_base_bdevs_discovered": 2, 00:35:40.915 "num_base_bdevs_operational": 2, 00:35:40.915 "base_bdevs_list": [ 00:35:40.915 { 00:35:40.915 "name": null, 00:35:40.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:40.915 "is_configured": false, 00:35:40.915 "data_offset": 0, 00:35:40.915 "data_size": 63488 00:35:40.915 }, 00:35:40.915 { 00:35:40.915 "name": null, 00:35:40.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:40.915 "is_configured": false, 00:35:40.915 "data_offset": 2048, 00:35:40.915 "data_size": 63488 00:35:40.915 }, 00:35:40.915 { 00:35:40.915 "name": "BaseBdev3", 00:35:40.915 "uuid": "84742018-8e45-56c0-a321-c084b6fa428f", 00:35:40.915 "is_configured": true, 00:35:40.915 "data_offset": 2048, 00:35:40.915 "data_size": 63488 00:35:40.915 }, 
00:35:40.915 { 00:35:40.915 "name": "BaseBdev4", 00:35:40.915 "uuid": "246e1dbc-dcdc-5de8-9168-a82c813bbbf6", 00:35:40.915 "is_configured": true, 00:35:40.915 "data_offset": 2048, 00:35:40.915 "data_size": 63488 00:35:40.915 } 00:35:40.915 ] 00:35:40.915 }' 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78462 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78462 ']' 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78462 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78462 00:35:40.915 killing process with pid 78462 00:35:40.915 Received shutdown signal, test time was about 60.000000 seconds 00:35:40.915 00:35:40.915 Latency(us) 00:35:40.915 [2024-11-26T17:32:18.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.915 [2024-11-26T17:32:18.362Z] =================================================================================================================== 00:35:40.915 [2024-11-26T17:32:18.362Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78462' 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78462 00:35:40.915 [2024-11-26 17:32:18.282074] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:40.915 [2024-11-26 17:32:18.282204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:40.915 17:32:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78462 00:35:40.915 [2024-11-26 17:32:18.282276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:40.915 [2024-11-26 17:32:18.282288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:35:41.483 [2024-11-26 17:32:18.770956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:42.861 ************************************ 00:35:42.861 END TEST raid_rebuild_test_sb 00:35:42.861 ************************************ 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:35:42.861 00:35:42.861 real 0m26.257s 00:35:42.861 user 0m31.220s 00:35:42.861 sys 0m4.438s 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:42.861 17:32:19 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:35:42.861 17:32:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:35:42.861 17:32:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.861 17:32:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:35:42.861 ************************************ 00:35:42.861 START TEST raid_rebuild_test_io 00:35:42.861 ************************************ 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79227 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79227 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79227 ']' 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.861 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:42.861 17:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:42.861 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:42.861 Zero copy mechanism will not be used. 00:35:42.861 [2024-11-26 17:32:20.101896] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:35:42.861 [2024-11-26 17:32:20.102109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79227 ] 00:35:42.861 [2024-11-26 17:32:20.290969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.131 [2024-11-26 17:32:20.438863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.391 [2024-11-26 17:32:20.642069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:43.391 [2024-11-26 17:32:20.642325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:43.650 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:43.650 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:35:43.650 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:43.650 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:43.650 17:32:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.650 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.650 BaseBdev1_malloc 00:35:43.650 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.650 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:43.650 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.650 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 [2024-11-26 17:32:21.099149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:43.909 [2024-11-26 17:32:21.099339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.909 [2024-11-26 17:32:21.099400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:43.909 [2024-11-26 17:32:21.099517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.909 [2024-11-26 17:32:21.101875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.909 [2024-11-26 17:32:21.102024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:43.909 BaseBdev1 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 
BaseBdev2_malloc 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 [2024-11-26 17:32:21.153419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:43.909 [2024-11-26 17:32:21.153498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.909 [2024-11-26 17:32:21.153524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:43.909 [2024-11-26 17:32:21.153538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.909 [2024-11-26 17:32:21.155901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.909 [2024-11-26 17:32:21.155946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:43.909 BaseBdev2 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 BaseBdev3_malloc 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 [2024-11-26 17:32:21.220870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:43.909 [2024-11-26 17:32:21.221038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.909 [2024-11-26 17:32:21.221107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:43.909 [2024-11-26 17:32:21.221192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.909 [2024-11-26 17:32:21.223527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.909 [2024-11-26 17:32:21.223670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:43.909 BaseBdev3 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 BaseBdev4_malloc 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 [2024-11-26 17:32:21.279841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:43.909 [2024-11-26 17:32:21.280022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.909 [2024-11-26 17:32:21.280098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:43.909 [2024-11-26 17:32:21.280181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.909 [2024-11-26 17:32:21.282562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.909 [2024-11-26 17:32:21.282706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:43.909 BaseBdev4 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 spare_malloc 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 spare_delay 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.909 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.909 [2024-11-26 17:32:21.350982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:43.909 [2024-11-26 17:32:21.351181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.909 [2024-11-26 17:32:21.351239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:43.909 [2024-11-26 17:32:21.351319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.909 [2024-11-26 17:32:21.353761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.909 [2024-11-26 17:32:21.353902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:44.169 spare 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:44.169 [2024-11-26 17:32:21.363029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:44.169 [2024-11-26 17:32:21.365331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:44.169 [2024-11-26 17:32:21.365522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:44.169 [2024-11-26 17:32:21.365624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:35:44.169 [2024-11-26 17:32:21.365806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:44.169 [2024-11-26 17:32:21.365861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:35:44.169 [2024-11-26 17:32:21.366216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:44.169 [2024-11-26 17:32:21.366501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:44.169 [2024-11-26 17:32:21.366606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:44.169 [2024-11-26 17:32:21.366862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.169 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:44.169 "name": "raid_bdev1", 00:35:44.169 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:44.169 "strip_size_kb": 0, 00:35:44.169 "state": "online", 00:35:44.169 "raid_level": "raid1", 00:35:44.169 "superblock": false, 00:35:44.169 "num_base_bdevs": 4, 00:35:44.169 "num_base_bdevs_discovered": 4, 00:35:44.169 "num_base_bdevs_operational": 4, 00:35:44.169 "base_bdevs_list": [ 00:35:44.169 { 00:35:44.169 "name": "BaseBdev1", 00:35:44.169 "uuid": "3c6677b3-14c3-5d65-a1e4-230cc10dda40", 00:35:44.169 "is_configured": true, 00:35:44.169 "data_offset": 0, 00:35:44.169 "data_size": 65536 00:35:44.169 }, 00:35:44.169 { 00:35:44.169 "name": "BaseBdev2", 00:35:44.169 "uuid": "fcb77172-013b-5529-9d0b-ff12b685fe63", 00:35:44.169 "is_configured": true, 00:35:44.169 "data_offset": 0, 00:35:44.169 "data_size": 65536 00:35:44.170 }, 00:35:44.170 { 00:35:44.170 "name": "BaseBdev3", 00:35:44.170 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:44.170 "is_configured": true, 00:35:44.170 "data_offset": 0, 00:35:44.170 "data_size": 65536 00:35:44.170 }, 00:35:44.170 { 00:35:44.170 "name": "BaseBdev4", 00:35:44.170 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:44.170 "is_configured": true, 00:35:44.170 "data_offset": 0, 00:35:44.170 "data_size": 65536 00:35:44.170 } 00:35:44.170 ] 00:35:44.170 }' 00:35:44.170 
17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:44.170 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:35:44.428 [2024-11-26 17:32:21.819602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:44.428 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:44.687 17:32:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:44.687 [2024-11-26 17:32:21.919225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.687 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:44.687 "name": "raid_bdev1", 00:35:44.687 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:44.687 "strip_size_kb": 0, 00:35:44.687 "state": "online", 00:35:44.687 "raid_level": "raid1", 00:35:44.687 "superblock": false, 00:35:44.687 "num_base_bdevs": 4, 00:35:44.687 "num_base_bdevs_discovered": 3, 00:35:44.687 "num_base_bdevs_operational": 3, 00:35:44.687 "base_bdevs_list": [ 00:35:44.687 { 00:35:44.687 "name": null, 00:35:44.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.687 "is_configured": false, 00:35:44.687 "data_offset": 0, 00:35:44.687 "data_size": 65536 00:35:44.687 }, 00:35:44.687 { 00:35:44.687 "name": "BaseBdev2", 00:35:44.687 "uuid": "fcb77172-013b-5529-9d0b-ff12b685fe63", 00:35:44.687 "is_configured": true, 00:35:44.687 "data_offset": 0, 00:35:44.687 "data_size": 65536 00:35:44.687 }, 00:35:44.687 { 00:35:44.687 "name": "BaseBdev3", 00:35:44.688 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:44.688 "is_configured": true, 00:35:44.688 "data_offset": 0, 00:35:44.688 "data_size": 65536 00:35:44.688 }, 00:35:44.688 { 00:35:44.688 "name": "BaseBdev4", 00:35:44.688 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:44.688 "is_configured": true, 00:35:44.688 "data_offset": 0, 00:35:44.688 "data_size": 65536 00:35:44.688 } 00:35:44.688 ] 00:35:44.688 }' 00:35:44.688 17:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:44.688 17:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:44.688 [2024-11-26 17:32:22.052008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:44.688 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:44.688 Zero copy mechanism will not be used. 00:35:44.688 Running I/O for 60 seconds... 
00:35:44.946 17:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:44.946 17:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.946 17:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:45.205 [2024-11-26 17:32:22.395910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:45.205 17:32:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.205 17:32:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:35:45.205 [2024-11-26 17:32:22.460030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:35:45.205 [2024-11-26 17:32:22.462462] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:45.205 [2024-11-26 17:32:22.569937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:45.205 [2024-11-26 17:32:22.571661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:45.465 [2024-11-26 17:32:22.793884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:45.465 [2024-11-26 17:32:22.794838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:45.983 181.00 IOPS, 543.00 MiB/s [2024-11-26T17:32:23.430Z] [2024-11-26 17:32:23.193147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:35:45.983 [2024-11-26 17:32:23.331932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:46.243 "name": "raid_bdev1", 00:35:46.243 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:46.243 "strip_size_kb": 0, 00:35:46.243 "state": "online", 00:35:46.243 "raid_level": "raid1", 00:35:46.243 "superblock": false, 00:35:46.243 "num_base_bdevs": 4, 00:35:46.243 "num_base_bdevs_discovered": 4, 00:35:46.243 "num_base_bdevs_operational": 4, 00:35:46.243 "process": { 00:35:46.243 "type": "rebuild", 00:35:46.243 "target": "spare", 00:35:46.243 "progress": { 00:35:46.243 "blocks": 12288, 00:35:46.243 "percent": 18 00:35:46.243 } 00:35:46.243 }, 00:35:46.243 "base_bdevs_list": [ 00:35:46.243 { 00:35:46.243 "name": "spare", 00:35:46.243 "uuid": "f0594f81-31b3-5b78-8623-718a7bac8966", 00:35:46.243 "is_configured": true, 00:35:46.243 "data_offset": 0, 00:35:46.243 "data_size": 65536 00:35:46.243 }, 00:35:46.243 { 
00:35:46.243 "name": "BaseBdev2", 00:35:46.243 "uuid": "fcb77172-013b-5529-9d0b-ff12b685fe63", 00:35:46.243 "is_configured": true, 00:35:46.243 "data_offset": 0, 00:35:46.243 "data_size": 65536 00:35:46.243 }, 00:35:46.243 { 00:35:46.243 "name": "BaseBdev3", 00:35:46.243 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:46.243 "is_configured": true, 00:35:46.243 "data_offset": 0, 00:35:46.243 "data_size": 65536 00:35:46.243 }, 00:35:46.243 { 00:35:46.243 "name": "BaseBdev4", 00:35:46.243 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:46.243 "is_configured": true, 00:35:46.243 "data_offset": 0, 00:35:46.243 "data_size": 65536 00:35:46.243 } 00:35:46.243 ] 00:35:46.243 }' 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:46.243 [2024-11-26 17:32:23.551779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:35:46.243 [2024-11-26 17:32:23.553404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.243 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.243 [2024-11-26 17:32:23.595739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:46.503 [2024-11-26 17:32:23.733277] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:46.503 [2024-11-26 17:32:23.745644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:46.503 [2024-11-26 17:32:23.745906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:46.503 [2024-11-26 17:32:23.745958] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:46.503 [2024-11-26 17:32:23.775438] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:46.503 "name": "raid_bdev1", 00:35:46.503 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:46.503 "strip_size_kb": 0, 00:35:46.503 "state": "online", 00:35:46.503 "raid_level": "raid1", 00:35:46.503 "superblock": false, 00:35:46.503 "num_base_bdevs": 4, 00:35:46.503 "num_base_bdevs_discovered": 3, 00:35:46.503 "num_base_bdevs_operational": 3, 00:35:46.503 "base_bdevs_list": [ 00:35:46.503 { 00:35:46.503 "name": null, 00:35:46.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.503 "is_configured": false, 00:35:46.503 "data_offset": 0, 00:35:46.503 "data_size": 65536 00:35:46.503 }, 00:35:46.503 { 00:35:46.503 "name": "BaseBdev2", 00:35:46.503 "uuid": "fcb77172-013b-5529-9d0b-ff12b685fe63", 00:35:46.503 "is_configured": true, 00:35:46.503 "data_offset": 0, 00:35:46.503 "data_size": 65536 00:35:46.503 }, 00:35:46.503 { 00:35:46.503 "name": "BaseBdev3", 00:35:46.503 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:46.503 "is_configured": true, 00:35:46.503 "data_offset": 0, 00:35:46.503 "data_size": 65536 00:35:46.503 }, 00:35:46.503 { 00:35:46.503 "name": "BaseBdev4", 00:35:46.503 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:46.503 "is_configured": true, 00:35:46.503 "data_offset": 0, 00:35:46.503 "data_size": 65536 00:35:46.503 } 00:35:46.503 ] 00:35:46.503 }' 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:46.503 17:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:47.022 157.50 IOPS, 472.50 MiB/s 
[2024-11-26T17:32:24.469Z] 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:47.022 "name": "raid_bdev1", 00:35:47.022 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:47.022 "strip_size_kb": 0, 00:35:47.022 "state": "online", 00:35:47.022 "raid_level": "raid1", 00:35:47.022 "superblock": false, 00:35:47.022 "num_base_bdevs": 4, 00:35:47.022 "num_base_bdevs_discovered": 3, 00:35:47.022 "num_base_bdevs_operational": 3, 00:35:47.022 "base_bdevs_list": [ 00:35:47.022 { 00:35:47.022 "name": null, 00:35:47.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.022 "is_configured": false, 00:35:47.022 "data_offset": 0, 00:35:47.022 "data_size": 65536 00:35:47.022 }, 00:35:47.022 { 00:35:47.022 "name": "BaseBdev2", 00:35:47.022 "uuid": "fcb77172-013b-5529-9d0b-ff12b685fe63", 00:35:47.022 "is_configured": true, 00:35:47.022 
"data_offset": 0, 00:35:47.022 "data_size": 65536 00:35:47.022 }, 00:35:47.022 { 00:35:47.022 "name": "BaseBdev3", 00:35:47.022 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:47.022 "is_configured": true, 00:35:47.022 "data_offset": 0, 00:35:47.022 "data_size": 65536 00:35:47.022 }, 00:35:47.022 { 00:35:47.022 "name": "BaseBdev4", 00:35:47.022 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:47.022 "is_configured": true, 00:35:47.022 "data_offset": 0, 00:35:47.022 "data_size": 65536 00:35:47.022 } 00:35:47.022 ] 00:35:47.022 }' 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:47.022 [2024-11-26 17:32:24.394078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.022 17:32:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:35:47.280 [2024-11-26 17:32:24.472879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:47.280 [2024-11-26 17:32:24.475224] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:47.280 [2024-11-26 17:32:24.577029] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:47.280 [2024-11-26 17:32:24.577900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:47.280 [2024-11-26 17:32:24.693987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:47.280 [2024-11-26 17:32:24.694833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:47.847 [2024-11-26 17:32:25.032351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:35:47.847 [2024-11-26 17:32:25.038781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:35:47.847 151.00 IOPS, 453.00 MiB/s [2024-11-26T17:32:25.294Z] [2024-11-26 17:32:25.258836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:35:47.847 [2024-11-26 17:32:25.259614] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.106 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:48.106 "name": "raid_bdev1", 00:35:48.106 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:48.106 "strip_size_kb": 0, 00:35:48.106 "state": "online", 00:35:48.106 "raid_level": "raid1", 00:35:48.106 "superblock": false, 00:35:48.107 "num_base_bdevs": 4, 00:35:48.107 "num_base_bdevs_discovered": 4, 00:35:48.107 "num_base_bdevs_operational": 4, 00:35:48.107 "process": { 00:35:48.107 "type": "rebuild", 00:35:48.107 "target": "spare", 00:35:48.107 "progress": { 00:35:48.107 "blocks": 10240, 00:35:48.107 "percent": 15 00:35:48.107 } 00:35:48.107 }, 00:35:48.107 "base_bdevs_list": [ 00:35:48.107 { 00:35:48.107 "name": "spare", 00:35:48.107 "uuid": "f0594f81-31b3-5b78-8623-718a7bac8966", 00:35:48.107 "is_configured": true, 00:35:48.107 "data_offset": 0, 00:35:48.107 "data_size": 65536 00:35:48.107 }, 00:35:48.107 { 00:35:48.107 "name": "BaseBdev2", 00:35:48.107 "uuid": "fcb77172-013b-5529-9d0b-ff12b685fe63", 00:35:48.107 "is_configured": true, 00:35:48.107 "data_offset": 0, 00:35:48.107 "data_size": 65536 00:35:48.107 }, 00:35:48.107 { 00:35:48.107 "name": "BaseBdev3", 00:35:48.107 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:48.107 "is_configured": true, 00:35:48.107 "data_offset": 0, 00:35:48.107 "data_size": 65536 00:35:48.107 }, 00:35:48.107 { 00:35:48.107 "name": "BaseBdev4", 00:35:48.107 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:48.107 "is_configured": true, 00:35:48.107 "data_offset": 0, 00:35:48.107 "data_size": 65536 00:35:48.107 } 00:35:48.107 ] 00:35:48.107 }' 
00:35:48.107 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:48.107 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:48.107 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:48.367 [2024-11-26 17:32:25.585293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:48.367 [2024-11-26 17:32:25.595126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:35:48.367 [2024-11-26 17:32:25.596824] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:35:48.367 [2024-11-26 17:32:25.596970] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:35:48.367 17:32:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:48.367 "name": "raid_bdev1", 00:35:48.367 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:48.367 "strip_size_kb": 0, 00:35:48.367 "state": "online", 00:35:48.367 "raid_level": "raid1", 00:35:48.367 "superblock": false, 00:35:48.367 "num_base_bdevs": 4, 00:35:48.367 "num_base_bdevs_discovered": 3, 00:35:48.367 "num_base_bdevs_operational": 3, 00:35:48.367 "process": { 00:35:48.367 "type": "rebuild", 00:35:48.367 "target": "spare", 00:35:48.367 "progress": { 00:35:48.367 "blocks": 14336, 00:35:48.367 "percent": 21 00:35:48.367 } 00:35:48.367 }, 00:35:48.367 "base_bdevs_list": [ 00:35:48.367 { 00:35:48.367 "name": "spare", 00:35:48.367 "uuid": 
"f0594f81-31b3-5b78-8623-718a7bac8966", 00:35:48.367 "is_configured": true, 00:35:48.367 "data_offset": 0, 00:35:48.367 "data_size": 65536 00:35:48.367 }, 00:35:48.367 { 00:35:48.367 "name": null, 00:35:48.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.367 "is_configured": false, 00:35:48.367 "data_offset": 0, 00:35:48.367 "data_size": 65536 00:35:48.367 }, 00:35:48.367 { 00:35:48.367 "name": "BaseBdev3", 00:35:48.367 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:48.367 "is_configured": true, 00:35:48.367 "data_offset": 0, 00:35:48.367 "data_size": 65536 00:35:48.367 }, 00:35:48.367 { 00:35:48.367 "name": "BaseBdev4", 00:35:48.367 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:48.367 "is_configured": true, 00:35:48.367 "data_offset": 0, 00:35:48.367 "data_size": 65536 00:35:48.367 } 00:35:48.367 ] 00:35:48.367 }' 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=499 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:48.367 17:32:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:48.367 [2024-11-26 17:32:25.750116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:48.367 17:32:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.368 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:48.368 "name": "raid_bdev1", 00:35:48.368 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:48.368 "strip_size_kb": 0, 00:35:48.368 "state": "online", 00:35:48.368 "raid_level": "raid1", 00:35:48.368 "superblock": false, 00:35:48.368 "num_base_bdevs": 4, 00:35:48.368 "num_base_bdevs_discovered": 3, 00:35:48.368 "num_base_bdevs_operational": 3, 00:35:48.368 "process": { 00:35:48.368 "type": "rebuild", 00:35:48.368 "target": "spare", 00:35:48.368 "progress": { 00:35:48.368 "blocks": 16384, 00:35:48.368 "percent": 25 00:35:48.368 } 00:35:48.368 }, 00:35:48.368 "base_bdevs_list": [ 00:35:48.368 { 00:35:48.368 "name": "spare", 00:35:48.368 "uuid": "f0594f81-31b3-5b78-8623-718a7bac8966", 00:35:48.368 "is_configured": true, 00:35:48.368 "data_offset": 0, 00:35:48.368 "data_size": 65536 00:35:48.368 }, 00:35:48.368 { 00:35:48.368 "name": null, 00:35:48.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.368 "is_configured": false, 00:35:48.368 "data_offset": 0, 00:35:48.368 "data_size": 65536 00:35:48.368 }, 00:35:48.368 { 00:35:48.368 "name": "BaseBdev3", 00:35:48.368 
"uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:48.368 "is_configured": true, 00:35:48.368 "data_offset": 0, 00:35:48.368 "data_size": 65536 00:35:48.368 }, 00:35:48.368 { 00:35:48.368 "name": "BaseBdev4", 00:35:48.368 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:48.368 "is_configured": true, 00:35:48.368 "data_offset": 0, 00:35:48.368 "data_size": 65536 00:35:48.368 } 00:35:48.368 ] 00:35:48.368 }' 00:35:48.368 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:48.627 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:48.627 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:48.627 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:48.627 17:32:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:48.627 [2024-11-26 17:32:25.993836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:35:48.886 130.50 IOPS, 391.50 MiB/s [2024-11-26T17:32:26.333Z] [2024-11-26 17:32:26.131494] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:35:48.886 [2024-11-26 17:32:26.131704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:35:49.145 [2024-11-26 17:32:26.444120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:35:49.145 [2024-11-26 17:32:26.546802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:35:49.404 [2024-11-26 17:32:26.773089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 
36864 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:49.664 "name": "raid_bdev1", 00:35:49.664 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:49.664 "strip_size_kb": 0, 00:35:49.664 "state": "online", 00:35:49.664 "raid_level": "raid1", 00:35:49.664 "superblock": false, 00:35:49.664 "num_base_bdevs": 4, 00:35:49.664 "num_base_bdevs_discovered": 3, 00:35:49.664 "num_base_bdevs_operational": 3, 00:35:49.664 "process": { 00:35:49.664 "type": "rebuild", 00:35:49.664 "target": "spare", 00:35:49.664 "progress": { 00:35:49.664 "blocks": 32768, 00:35:49.664 "percent": 50 00:35:49.664 } 00:35:49.664 }, 00:35:49.664 "base_bdevs_list": [ 00:35:49.664 { 00:35:49.664 "name": "spare", 00:35:49.664 "uuid": 
"f0594f81-31b3-5b78-8623-718a7bac8966", 00:35:49.664 "is_configured": true, 00:35:49.664 "data_offset": 0, 00:35:49.664 "data_size": 65536 00:35:49.664 }, 00:35:49.664 { 00:35:49.664 "name": null, 00:35:49.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.664 "is_configured": false, 00:35:49.664 "data_offset": 0, 00:35:49.664 "data_size": 65536 00:35:49.664 }, 00:35:49.664 { 00:35:49.664 "name": "BaseBdev3", 00:35:49.664 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:49.664 "is_configured": true, 00:35:49.664 "data_offset": 0, 00:35:49.664 "data_size": 65536 00:35:49.664 }, 00:35:49.664 { 00:35:49.664 "name": "BaseBdev4", 00:35:49.664 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:49.664 "is_configured": true, 00:35:49.664 "data_offset": 0, 00:35:49.664 "data_size": 65536 00:35:49.664 } 00:35:49.664 ] 00:35:49.664 }' 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:49.664 17:32:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:49.664 [2024-11-26 17:32:27.004392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:35:49.664 17:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:49.664 17:32:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:49.924 116.40 IOPS, 349.20 MiB/s [2024-11-26T17:32:27.371Z] [2024-11-26 17:32:27.365355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:35:50.184 [2024-11-26 17:32:27.568698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:50.753 "name": "raid_bdev1", 00:35:50.753 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:50.753 "strip_size_kb": 0, 00:35:50.753 "state": "online", 00:35:50.753 "raid_level": "raid1", 00:35:50.753 "superblock": false, 00:35:50.753 "num_base_bdevs": 4, 00:35:50.753 "num_base_bdevs_discovered": 3, 00:35:50.753 "num_base_bdevs_operational": 3, 00:35:50.753 "process": { 00:35:50.753 "type": "rebuild", 00:35:50.753 "target": "spare", 00:35:50.753 "progress": { 00:35:50.753 "blocks": 49152, 00:35:50.753 "percent": 75 00:35:50.753 } 00:35:50.753 }, 00:35:50.753 "base_bdevs_list": [ 00:35:50.753 { 00:35:50.753 "name": "spare", 00:35:50.753 "uuid": "f0594f81-31b3-5b78-8623-718a7bac8966", 00:35:50.753 "is_configured": 
true, 00:35:50.753 "data_offset": 0, 00:35:50.753 "data_size": 65536 00:35:50.753 }, 00:35:50.753 { 00:35:50.753 "name": null, 00:35:50.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:50.753 "is_configured": false, 00:35:50.753 "data_offset": 0, 00:35:50.753 "data_size": 65536 00:35:50.753 }, 00:35:50.753 { 00:35:50.753 "name": "BaseBdev3", 00:35:50.753 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:50.753 "is_configured": true, 00:35:50.753 "data_offset": 0, 00:35:50.753 "data_size": 65536 00:35:50.753 }, 00:35:50.753 { 00:35:50.753 "name": "BaseBdev4", 00:35:50.753 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:50.753 "is_configured": true, 00:35:50.753 "data_offset": 0, 00:35:50.753 "data_size": 65536 00:35:50.753 } 00:35:50.753 ] 00:35:50.753 }' 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:50.753 104.83 IOPS, 314.50 MiB/s [2024-11-26T17:32:28.200Z] 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:50.753 17:32:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:51.691 [2024-11-26 17:32:28.841924] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:51.691 [2024-11-26 17:32:28.942011] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:51.691 [2024-11-26 17:32:28.944267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:51.955 95.57 IOPS, 286.71 MiB/s [2024-11-26T17:32:29.402Z] 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:51.955 17:32:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:51.956 "name": "raid_bdev1", 00:35:51.956 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:51.956 "strip_size_kb": 0, 00:35:51.956 "state": "online", 00:35:51.956 "raid_level": "raid1", 00:35:51.956 "superblock": false, 00:35:51.956 "num_base_bdevs": 4, 00:35:51.956 "num_base_bdevs_discovered": 3, 00:35:51.956 "num_base_bdevs_operational": 3, 00:35:51.956 "base_bdevs_list": [ 00:35:51.956 { 00:35:51.956 "name": "spare", 00:35:51.956 "uuid": "f0594f81-31b3-5b78-8623-718a7bac8966", 00:35:51.956 "is_configured": true, 00:35:51.956 "data_offset": 0, 00:35:51.956 "data_size": 65536 00:35:51.956 }, 00:35:51.956 { 00:35:51.956 "name": null, 00:35:51.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.956 "is_configured": false, 00:35:51.956 "data_offset": 0, 00:35:51.956 "data_size": 65536 00:35:51.956 }, 
00:35:51.956 { 00:35:51.956 "name": "BaseBdev3", 00:35:51.956 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:51.956 "is_configured": true, 00:35:51.956 "data_offset": 0, 00:35:51.956 "data_size": 65536 00:35:51.956 }, 00:35:51.956 { 00:35:51.956 "name": "BaseBdev4", 00:35:51.956 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:51.956 "is_configured": true, 00:35:51.956 "data_offset": 0, 00:35:51.956 "data_size": 65536 00:35:51.956 } 00:35:51.956 ] 00:35:51.956 }' 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:35:51.956 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.957 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:51.957 "name": "raid_bdev1", 00:35:51.957 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:51.957 "strip_size_kb": 0, 00:35:51.957 "state": "online", 00:35:51.957 "raid_level": "raid1", 00:35:51.957 "superblock": false, 00:35:51.957 "num_base_bdevs": 4, 00:35:51.957 "num_base_bdevs_discovered": 3, 00:35:51.957 "num_base_bdevs_operational": 3, 00:35:51.957 "base_bdevs_list": [ 00:35:51.957 { 00:35:51.957 "name": "spare", 00:35:51.957 "uuid": "f0594f81-31b3-5b78-8623-718a7bac8966", 00:35:51.957 "is_configured": true, 00:35:51.957 "data_offset": 0, 00:35:51.957 "data_size": 65536 00:35:51.957 }, 00:35:51.957 { 00:35:51.957 "name": null, 00:35:51.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.957 "is_configured": false, 00:35:51.957 "data_offset": 0, 00:35:51.957 "data_size": 65536 00:35:51.957 }, 00:35:51.957 { 00:35:51.957 "name": "BaseBdev3", 00:35:51.957 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:51.958 "is_configured": true, 00:35:51.958 "data_offset": 0, 00:35:51.958 "data_size": 65536 00:35:51.958 }, 00:35:51.958 { 00:35:51.958 "name": "BaseBdev4", 00:35:51.958 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:51.958 "is_configured": true, 00:35:51.958 "data_offset": 0, 00:35:51.958 "data_size": 65536 00:35:51.958 } 00:35:51.958 ] 00:35:51.958 }' 00:35:51.958 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:51.958 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:52.220 "name": "raid_bdev1", 00:35:52.220 "uuid": "adccd4be-17a7-4c0b-a1dd-da2774bc027f", 00:35:52.220 "strip_size_kb": 0, 00:35:52.220 "state": "online", 00:35:52.220 "raid_level": "raid1", 00:35:52.220 "superblock": false, 00:35:52.220 
"num_base_bdevs": 4, 00:35:52.220 "num_base_bdevs_discovered": 3, 00:35:52.220 "num_base_bdevs_operational": 3, 00:35:52.220 "base_bdevs_list": [ 00:35:52.220 { 00:35:52.220 "name": "spare", 00:35:52.220 "uuid": "f0594f81-31b3-5b78-8623-718a7bac8966", 00:35:52.220 "is_configured": true, 00:35:52.220 "data_offset": 0, 00:35:52.220 "data_size": 65536 00:35:52.220 }, 00:35:52.220 { 00:35:52.220 "name": null, 00:35:52.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:52.220 "is_configured": false, 00:35:52.220 "data_offset": 0, 00:35:52.220 "data_size": 65536 00:35:52.220 }, 00:35:52.220 { 00:35:52.220 "name": "BaseBdev3", 00:35:52.220 "uuid": "db064682-3974-5689-8e6c-ffdd5e71f1fa", 00:35:52.220 "is_configured": true, 00:35:52.220 "data_offset": 0, 00:35:52.220 "data_size": 65536 00:35:52.220 }, 00:35:52.220 { 00:35:52.220 "name": "BaseBdev4", 00:35:52.220 "uuid": "45232db2-a1c8-5509-816a-4be9bc56dba5", 00:35:52.220 "is_configured": true, 00:35:52.220 "data_offset": 0, 00:35:52.220 "data_size": 65536 00:35:52.220 } 00:35:52.220 ] 00:35:52.220 }' 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:52.220 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:52.480 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:52.480 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.480 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:52.480 [2024-11-26 17:32:29.885606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:52.480 [2024-11-26 17:32:29.885641] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:52.739 00:35:52.739 Latency(us) 00:35:52.739 [2024-11-26T17:32:30.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.739 
Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:35:52.739 raid_bdev1 : 7.89 90.10 270.30 0.00 0.00 15524.47 300.37 118838.61 00:35:52.739 [2024-11-26T17:32:30.186Z] =================================================================================================================== 00:35:52.739 [2024-11-26T17:32:30.186Z] Total : 90.10 270.30 0.00 0.00 15524.47 300.37 118838.61 00:35:52.739 [2024-11-26 17:32:29.968915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:52.739 [2024-11-26 17:32:29.969137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:52.739 [2024-11-26 17:32:29.969278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:52.739 [2024-11-26 17:32:29.969397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:52.739 { 00:35:52.739 "results": [ 00:35:52.739 { 00:35:52.739 "job": "raid_bdev1", 00:35:52.739 "core_mask": "0x1", 00:35:52.739 "workload": "randrw", 00:35:52.739 "percentage": 50, 00:35:52.739 "status": "finished", 00:35:52.739 "queue_depth": 2, 00:35:52.739 "io_size": 3145728, 00:35:52.739 "runtime": 7.891205, 00:35:52.739 "iops": 90.1003078743994, 00:35:52.739 "mibps": 270.30092362319823, 00:35:52.739 "io_failed": 0, 00:35:52.739 "io_timeout": 0, 00:35:52.739 "avg_latency_us": 15524.4695840868, 00:35:52.739 "min_latency_us": 300.37333333333333, 00:35:52.739 "max_latency_us": 118838.61333333333 00:35:52.739 } 00:35:52.739 ], 00:35:52.739 "core_count": 1 00:35:52.739 } 00:35:52.739 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.739 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:35:52.739 17:32:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.739 17:32:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.739 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:52.739 17:32:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:52.739 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:35:52.998 /dev/nbd0 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:52.998 1+0 records in 00:35:52.998 1+0 records out 00:35:52.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608627 s, 6.7 MB/s 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:52.998 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:52.999 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:35:53.257 /dev/nbd1 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 
00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:53.257 1+0 records in 00:35:53.257 1+0 records out 00:35:53.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694511 s, 5.9 MB/s 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:53.257 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:35:53.516 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:35:53.516 
17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:53.516 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:35:53.516 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:53.516 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:35:53.516 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:53.516 17:32:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:35:53.775 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:53.776 17:32:31 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:53.776 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:35:54.035 /dev/nbd1 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:54.035 1+0 records in 00:35:54.035 1+0 records out 00:35:54.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340795 s, 12.0 MB/s 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:54.035 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:35:54.295 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:35:54.295 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:54.295 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:35:54.295 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:54.295 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:35:54.295 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:54.295 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:35:54.553 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:54.553 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:54.553 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:54.553 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:54.553 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:54.553 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:54.553 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:35:54.553 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:35:54.554 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:35:54.554 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:54.554 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:54.554 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:54.554 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:35:54.554 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:54.554 17:32:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79227 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79227 ']' 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79227 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79227 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:54.830 killing process with pid 79227 00:35:54.830 Received shutdown signal, test time was about 10.014694 seconds 00:35:54.830 00:35:54.830 Latency(us) 00:35:54.830 [2024-11-26T17:32:32.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.830 [2024-11-26T17:32:32.277Z] =================================================================================================================== 00:35:54.830 [2024-11-26T17:32:32.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 79227' 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79227 00:35:54.830 [2024-11-26 17:32:32.069050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:54.830 17:32:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79227 00:35:55.101 [2024-11-26 17:32:32.493735] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:56.479 17:32:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:35:56.479 00:35:56.479 real 0m13.707s 00:35:56.479 user 0m17.415s 00:35:56.479 sys 0m2.077s 00:35:56.479 17:32:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.479 ************************************ 00:35:56.479 END TEST raid_rebuild_test_io 00:35:56.479 ************************************ 00:35:56.479 17:32:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:56.479 17:32:33 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:35:56.479 17:32:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:35:56.479 17:32:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.479 17:32:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:56.479 ************************************ 00:35:56.479 START TEST raid_rebuild_test_sb_io 00:35:56.479 ************************************ 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local 
superblock=true 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79636 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79636 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79636 ']' 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:56.480 17:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:56.480 [2024-11-26 17:32:33.875448] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:35:56.480 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:56.480 Zero copy mechanism will not be used. 00:35:56.480 [2024-11-26 17:32:33.875887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79636 ] 00:35:56.739 [2024-11-26 17:32:34.064476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.739 [2024-11-26 17:32:34.180303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.998 [2024-11-26 17:32:34.377819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:56.998 [2024-11-26 17:32:34.378010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.567 BaseBdev1_malloc 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.567 [2024-11-26 17:32:34.859569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:57.567 [2024-11-26 17:32:34.859647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:57.567 [2024-11-26 17:32:34.859674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:57.567 [2024-11-26 17:32:34.859689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:57.567 [2024-11-26 17:32:34.862922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:57.567 [2024-11-26 17:32:34.863116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:57.567 BaseBdev1 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.567 BaseBdev2_malloc 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.567 [2024-11-26 17:32:34.911607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:57.567 [2024-11-26 17:32:34.911670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:57.567 [2024-11-26 17:32:34.911696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:57.567 [2024-11-26 17:32:34.911710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:57.567 [2024-11-26 17:32:34.914143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:57.567 [2024-11-26 17:32:34.914185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:57.567 BaseBdev2 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.567 BaseBdev3_malloc 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:57.567 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.567 17:32:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.568 [2024-11-26 17:32:34.976735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:57.568 [2024-11-26 17:32:34.976798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:57.568 [2024-11-26 17:32:34.976822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:57.568 [2024-11-26 17:32:34.976836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:57.568 [2024-11-26 17:32:34.979213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:57.568 [2024-11-26 17:32:34.979255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:57.568 BaseBdev3 00:35:57.568 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.568 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:57.568 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:57.568 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.568 17:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.828 BaseBdev4_malloc 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.828 [2024-11-26 17:32:35.028759] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:35:57.828 [2024-11-26 17:32:35.028819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:57.828 [2024-11-26 17:32:35.028842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:57.828 [2024-11-26 17:32:35.028855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:57.828 [2024-11-26 17:32:35.031211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:57.828 [2024-11-26 17:32:35.031256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:57.828 BaseBdev4 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.828 spare_malloc 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.828 spare_delay 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.828 [2024-11-26 17:32:35.089771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:57.828 [2024-11-26 17:32:35.089963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:57.828 [2024-11-26 17:32:35.089992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:57.828 [2024-11-26 17:32:35.090006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:57.828 [2024-11-26 17:32:35.092409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:57.828 [2024-11-26 17:32:35.092450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:57.828 spare 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.828 [2024-11-26 17:32:35.101828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:57.828 [2024-11-26 17:32:35.103929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:57.828 [2024-11-26 17:32:35.103990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:57.828 [2024-11-26 17:32:35.104039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:57.828 [2024-11-26 17:32:35.104233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:35:57.828 [2024-11-26 17:32:35.104249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:57.828 [2024-11-26 17:32:35.104500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:57.828 [2024-11-26 17:32:35.104664] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:57.828 [2024-11-26 17:32:35.104675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:57.828 [2024-11-26 17:32:35.104820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.828 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:57.828 "name": "raid_bdev1", 00:35:57.828 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:35:57.828 "strip_size_kb": 0, 00:35:57.828 "state": "online", 00:35:57.828 "raid_level": "raid1", 00:35:57.828 "superblock": true, 00:35:57.828 "num_base_bdevs": 4, 00:35:57.828 "num_base_bdevs_discovered": 4, 00:35:57.828 "num_base_bdevs_operational": 4, 00:35:57.828 "base_bdevs_list": [ 00:35:57.828 { 00:35:57.828 "name": "BaseBdev1", 00:35:57.828 "uuid": "f5eb76c1-2647-5882-b484-41f67e09032b", 00:35:57.828 "is_configured": true, 00:35:57.828 "data_offset": 2048, 00:35:57.828 "data_size": 63488 00:35:57.828 }, 00:35:57.828 { 00:35:57.828 "name": "BaseBdev2", 00:35:57.828 "uuid": "d0b0dfb7-c4d8-52fe-80b6-a679240b7fad", 00:35:57.828 "is_configured": true, 00:35:57.828 "data_offset": 2048, 00:35:57.828 "data_size": 63488 00:35:57.828 }, 00:35:57.828 { 00:35:57.828 "name": "BaseBdev3", 00:35:57.828 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:35:57.828 "is_configured": true, 00:35:57.828 "data_offset": 2048, 00:35:57.828 "data_size": 63488 00:35:57.828 }, 00:35:57.828 { 00:35:57.828 "name": "BaseBdev4", 00:35:57.828 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:35:57.828 "is_configured": true, 00:35:57.828 "data_offset": 2048, 00:35:57.829 "data_size": 63488 00:35:57.829 } 00:35:57.829 ] 00:35:57.829 }' 00:35:57.829 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:35:57.829 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:58.397 [2024-11-26 17:32:35.554266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:58.397 17:32:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:58.397 [2024-11-26 17:32:35.637885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:58.397 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:58.398 "name": "raid_bdev1", 00:35:58.398 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:35:58.398 "strip_size_kb": 0, 00:35:58.398 "state": "online", 00:35:58.398 "raid_level": "raid1", 00:35:58.398 "superblock": true, 00:35:58.398 "num_base_bdevs": 4, 00:35:58.398 "num_base_bdevs_discovered": 3, 00:35:58.398 "num_base_bdevs_operational": 3, 00:35:58.398 "base_bdevs_list": [ 00:35:58.398 { 00:35:58.398 "name": null, 00:35:58.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.398 "is_configured": false, 00:35:58.398 "data_offset": 0, 00:35:58.398 "data_size": 63488 00:35:58.398 }, 00:35:58.398 { 00:35:58.398 "name": "BaseBdev2", 00:35:58.398 "uuid": "d0b0dfb7-c4d8-52fe-80b6-a679240b7fad", 00:35:58.398 "is_configured": true, 00:35:58.398 "data_offset": 2048, 00:35:58.398 "data_size": 63488 00:35:58.398 }, 00:35:58.398 { 00:35:58.398 "name": "BaseBdev3", 00:35:58.398 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:35:58.398 "is_configured": true, 00:35:58.398 "data_offset": 2048, 00:35:58.398 "data_size": 63488 00:35:58.398 }, 00:35:58.398 { 00:35:58.398 "name": "BaseBdev4", 00:35:58.398 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:35:58.398 "is_configured": true, 00:35:58.398 "data_offset": 2048, 00:35:58.398 "data_size": 63488 00:35:58.398 } 00:35:58.398 ] 00:35:58.398 }' 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:58.398 17:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:58.398 [2024-11-26 17:32:35.761803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:58.398 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:58.398 Zero copy mechanism will not be used. 
00:35:58.398 Running I/O for 60 seconds... 00:35:58.657 17:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:58.657 17:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.657 17:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:58.657 [2024-11-26 17:32:36.095406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:58.916 17:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.916 17:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:35:58.916 [2024-11-26 17:32:36.143837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:35:58.916 [2024-11-26 17:32:36.146105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:58.916 [2024-11-26 17:32:36.255277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:58.916 [2024-11-26 17:32:36.256751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:59.175 [2024-11-26 17:32:36.483392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:59.175 [2024-11-26 17:32:36.484348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:59.434 157.00 IOPS, 471.00 MiB/s [2024-11-26T17:32:36.881Z] [2024-11-26 17:32:36.859924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:35:59.434 [2024-11-26 17:32:36.861653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:35:59.693 
[2024-11-26 17:32:37.074275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:35:59.693 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:59.693 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:59.693 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:59.693 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:59.693 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:59.693 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:59.693 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.693 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:59.693 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:59.952 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.952 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:59.952 "name": "raid_bdev1", 00:35:59.952 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:35:59.952 "strip_size_kb": 0, 00:35:59.952 "state": "online", 00:35:59.952 "raid_level": "raid1", 00:35:59.952 "superblock": true, 00:35:59.952 "num_base_bdevs": 4, 00:35:59.952 "num_base_bdevs_discovered": 4, 00:35:59.952 "num_base_bdevs_operational": 4, 00:35:59.952 "process": { 00:35:59.952 "type": "rebuild", 00:35:59.952 "target": "spare", 00:35:59.952 "progress": { 00:35:59.952 "blocks": 10240, 00:35:59.952 "percent": 16 00:35:59.952 } 00:35:59.952 }, 00:35:59.952 "base_bdevs_list": [ 
00:35:59.952 { 00:35:59.952 "name": "spare", 00:35:59.952 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:35:59.952 "is_configured": true, 00:35:59.952 "data_offset": 2048, 00:35:59.952 "data_size": 63488 00:35:59.952 }, 00:35:59.952 { 00:35:59.952 "name": "BaseBdev2", 00:35:59.952 "uuid": "d0b0dfb7-c4d8-52fe-80b6-a679240b7fad", 00:35:59.952 "is_configured": true, 00:35:59.952 "data_offset": 2048, 00:35:59.952 "data_size": 63488 00:35:59.952 }, 00:35:59.952 { 00:35:59.952 "name": "BaseBdev3", 00:35:59.952 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:35:59.952 "is_configured": true, 00:35:59.952 "data_offset": 2048, 00:35:59.952 "data_size": 63488 00:35:59.952 }, 00:35:59.952 { 00:35:59.952 "name": "BaseBdev4", 00:35:59.952 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:35:59.952 "is_configured": true, 00:35:59.952 "data_offset": 2048, 00:35:59.952 "data_size": 63488 00:35:59.952 } 00:35:59.952 ] 00:35:59.952 }' 00:35:59.952 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:59.952 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:59.952 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:59.952 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:59.952 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:59.952 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.953 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:59.953 [2024-11-26 17:32:37.276459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:59.953 [2024-11-26 17:32:37.297811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:36:00.212 [2024-11-26 17:32:37.400247] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:00.212 [2024-11-26 17:32:37.410752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:00.212 [2024-11-26 17:32:37.410815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:00.212 [2024-11-26 17:32:37.410834] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:00.212 [2024-11-26 17:32:37.441109] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:00.212 "name": "raid_bdev1", 00:36:00.212 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:00.212 "strip_size_kb": 0, 00:36:00.212 "state": "online", 00:36:00.212 "raid_level": "raid1", 00:36:00.212 "superblock": true, 00:36:00.212 "num_base_bdevs": 4, 00:36:00.212 "num_base_bdevs_discovered": 3, 00:36:00.212 "num_base_bdevs_operational": 3, 00:36:00.212 "base_bdevs_list": [ 00:36:00.212 { 00:36:00.212 "name": null, 00:36:00.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.212 "is_configured": false, 00:36:00.212 "data_offset": 0, 00:36:00.212 "data_size": 63488 00:36:00.212 }, 00:36:00.212 { 00:36:00.212 "name": "BaseBdev2", 00:36:00.212 "uuid": "d0b0dfb7-c4d8-52fe-80b6-a679240b7fad", 00:36:00.212 "is_configured": true, 00:36:00.212 "data_offset": 2048, 00:36:00.212 "data_size": 63488 00:36:00.212 }, 00:36:00.212 { 00:36:00.212 "name": "BaseBdev3", 00:36:00.212 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:00.212 "is_configured": true, 00:36:00.212 "data_offset": 2048, 00:36:00.212 "data_size": 63488 00:36:00.212 }, 00:36:00.212 { 00:36:00.212 "name": "BaseBdev4", 00:36:00.212 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:00.212 "is_configured": true, 00:36:00.212 "data_offset": 2048, 00:36:00.212 "data_size": 63488 00:36:00.212 } 00:36:00.212 ] 00:36:00.212 }' 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:36:00.212 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:00.471 143.00 IOPS, 429.00 MiB/s [2024-11-26T17:32:37.918Z] 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:00.472 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:00.472 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:00.472 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:00.472 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:00.731 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.731 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.731 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.731 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:00.731 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.731 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:00.731 "name": "raid_bdev1", 00:36:00.731 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:00.731 "strip_size_kb": 0, 00:36:00.731 "state": "online", 00:36:00.731 "raid_level": "raid1", 00:36:00.731 "superblock": true, 00:36:00.731 "num_base_bdevs": 4, 00:36:00.731 "num_base_bdevs_discovered": 3, 00:36:00.731 "num_base_bdevs_operational": 3, 00:36:00.731 "base_bdevs_list": [ 00:36:00.731 { 00:36:00.731 "name": null, 00:36:00.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.731 "is_configured": false, 00:36:00.731 "data_offset": 0, 00:36:00.731 "data_size": 63488 
00:36:00.731 }, 00:36:00.731 { 00:36:00.731 "name": "BaseBdev2", 00:36:00.731 "uuid": "d0b0dfb7-c4d8-52fe-80b6-a679240b7fad", 00:36:00.731 "is_configured": true, 00:36:00.731 "data_offset": 2048, 00:36:00.731 "data_size": 63488 00:36:00.731 }, 00:36:00.731 { 00:36:00.731 "name": "BaseBdev3", 00:36:00.731 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:00.731 "is_configured": true, 00:36:00.731 "data_offset": 2048, 00:36:00.731 "data_size": 63488 00:36:00.731 }, 00:36:00.731 { 00:36:00.731 "name": "BaseBdev4", 00:36:00.731 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:00.731 "is_configured": true, 00:36:00.731 "data_offset": 2048, 00:36:00.731 "data_size": 63488 00:36:00.731 } 00:36:00.731 ] 00:36:00.731 }' 00:36:00.731 17:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:00.731 17:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:00.731 17:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:00.731 17:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:00.731 17:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:00.731 17:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.731 17:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:00.731 [2024-11-26 17:32:38.079831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:00.731 17:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.731 17:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:36:00.731 [2024-11-26 17:32:38.142810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:36:00.731 
[2024-11-26 17:32:38.145162] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:00.991 [2024-11-26 17:32:38.253082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:36:00.991 [2024-11-26 17:32:38.254577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:36:01.249 [2024-11-26 17:32:38.465961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:36:01.249 [2024-11-26 17:32:38.466350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:36:01.508 160.33 IOPS, 481.00 MiB/s [2024-11-26T17:32:38.956Z] [2024-11-26 17:32:38.834466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.768 [2024-11-26 17:32:39.172377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:36:01.768 [2024-11-26 17:32:39.173127] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:01.768 "name": "raid_bdev1", 00:36:01.768 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:01.768 "strip_size_kb": 0, 00:36:01.768 "state": "online", 00:36:01.768 "raid_level": "raid1", 00:36:01.768 "superblock": true, 00:36:01.768 "num_base_bdevs": 4, 00:36:01.768 "num_base_bdevs_discovered": 4, 00:36:01.768 "num_base_bdevs_operational": 4, 00:36:01.768 "process": { 00:36:01.768 "type": "rebuild", 00:36:01.768 "target": "spare", 00:36:01.768 "progress": { 00:36:01.768 "blocks": 14336, 00:36:01.768 "percent": 22 00:36:01.768 } 00:36:01.768 }, 00:36:01.768 "base_bdevs_list": [ 00:36:01.768 { 00:36:01.768 "name": "spare", 00:36:01.768 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:01.768 "is_configured": true, 00:36:01.768 "data_offset": 2048, 00:36:01.768 "data_size": 63488 00:36:01.768 }, 00:36:01.768 { 00:36:01.768 "name": "BaseBdev2", 00:36:01.768 "uuid": "d0b0dfb7-c4d8-52fe-80b6-a679240b7fad", 00:36:01.768 "is_configured": true, 00:36:01.768 "data_offset": 2048, 00:36:01.768 "data_size": 63488 00:36:01.768 }, 00:36:01.768 { 00:36:01.768 "name": "BaseBdev3", 00:36:01.768 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:01.768 "is_configured": true, 00:36:01.768 "data_offset": 2048, 00:36:01.768 "data_size": 63488 00:36:01.768 }, 00:36:01.768 { 00:36:01.768 "name": "BaseBdev4", 00:36:01.768 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:01.768 "is_configured": true, 00:36:01.768 
"data_offset": 2048, 00:36:01.768 "data_size": 63488 00:36:01.768 } 00:36:01.768 ] 00:36:01.768 }' 00:36:01.768 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:36:02.026 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.026 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:02.026 [2024-11-26 17:32:39.283216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:02.285 [2024-11-26 17:32:39.601649] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:36:02.285 [2024-11-26 17:32:39.601727] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.285 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:02.285 "name": "raid_bdev1", 00:36:02.285 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:02.285 "strip_size_kb": 0, 00:36:02.285 "state": "online", 00:36:02.285 "raid_level": "raid1", 00:36:02.285 "superblock": true, 00:36:02.285 "num_base_bdevs": 4, 00:36:02.285 "num_base_bdevs_discovered": 3, 00:36:02.285 "num_base_bdevs_operational": 3, 00:36:02.285 "process": { 00:36:02.285 "type": "rebuild", 00:36:02.285 "target": "spare", 00:36:02.285 "progress": { 
00:36:02.285 "blocks": 18432, 00:36:02.285 "percent": 29 00:36:02.285 } 00:36:02.285 }, 00:36:02.285 "base_bdevs_list": [ 00:36:02.285 { 00:36:02.285 "name": "spare", 00:36:02.285 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:02.285 "is_configured": true, 00:36:02.285 "data_offset": 2048, 00:36:02.285 "data_size": 63488 00:36:02.285 }, 00:36:02.285 { 00:36:02.285 "name": null, 00:36:02.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.285 "is_configured": false, 00:36:02.285 "data_offset": 0, 00:36:02.285 "data_size": 63488 00:36:02.285 }, 00:36:02.285 { 00:36:02.285 "name": "BaseBdev3", 00:36:02.285 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:02.285 "is_configured": true, 00:36:02.285 "data_offset": 2048, 00:36:02.285 "data_size": 63488 00:36:02.285 }, 00:36:02.285 { 00:36:02.285 "name": "BaseBdev4", 00:36:02.285 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:02.285 "is_configured": true, 00:36:02.285 "data_offset": 2048, 00:36:02.285 "data_size": 63488 00:36:02.286 } 00:36:02.286 ] 00:36:02.286 }' 00:36:02.286 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:02.286 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:02.286 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:02.545 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:02.545 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=513 00:36:02.545 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:02.545 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:02.545 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:02.545 
17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:02.545 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:02.546 138.50 IOPS, 415.50 MiB/s [2024-11-26T17:32:39.993Z] 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:02.546 "name": "raid_bdev1", 00:36:02.546 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:02.546 "strip_size_kb": 0, 00:36:02.546 "state": "online", 00:36:02.546 "raid_level": "raid1", 00:36:02.546 "superblock": true, 00:36:02.546 "num_base_bdevs": 4, 00:36:02.546 "num_base_bdevs_discovered": 3, 00:36:02.546 "num_base_bdevs_operational": 3, 00:36:02.546 "process": { 00:36:02.546 "type": "rebuild", 00:36:02.546 "target": "spare", 00:36:02.546 "progress": { 00:36:02.546 "blocks": 20480, 00:36:02.546 "percent": 32 00:36:02.546 } 00:36:02.546 }, 00:36:02.546 "base_bdevs_list": [ 00:36:02.546 { 00:36:02.546 "name": "spare", 00:36:02.546 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:02.546 "is_configured": true, 00:36:02.546 "data_offset": 2048, 00:36:02.546 "data_size": 63488 00:36:02.546 }, 00:36:02.546 { 00:36:02.546 "name": null, 00:36:02.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.546 "is_configured": false, 
00:36:02.546 "data_offset": 0, 00:36:02.546 "data_size": 63488 00:36:02.546 }, 00:36:02.546 { 00:36:02.546 "name": "BaseBdev3", 00:36:02.546 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:02.546 "is_configured": true, 00:36:02.546 "data_offset": 2048, 00:36:02.546 "data_size": 63488 00:36:02.546 }, 00:36:02.546 { 00:36:02.546 "name": "BaseBdev4", 00:36:02.546 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:02.546 "is_configured": true, 00:36:02.546 "data_offset": 2048, 00:36:02.546 "data_size": 63488 00:36:02.546 } 00:36:02.546 ] 00:36:02.546 }' 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:02.546 17:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:02.805 [2024-11-26 17:32:40.044046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:36:03.374 [2024-11-26 17:32:40.702789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:36:03.374 [2024-11-26 17:32:40.703751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:36:03.643 120.00 IOPS, 360.00 MiB/s [2024-11-26T17:32:41.090Z] 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.643 [2024-11-26 17:32:40.923275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:03.643 "name": "raid_bdev1", 00:36:03.643 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:03.643 "strip_size_kb": 0, 00:36:03.643 "state": "online", 00:36:03.643 "raid_level": "raid1", 00:36:03.643 "superblock": true, 00:36:03.643 "num_base_bdevs": 4, 00:36:03.643 "num_base_bdevs_discovered": 3, 00:36:03.643 "num_base_bdevs_operational": 3, 00:36:03.643 "process": { 00:36:03.643 "type": "rebuild", 00:36:03.643 "target": "spare", 00:36:03.643 "progress": { 00:36:03.643 "blocks": 38912, 00:36:03.643 "percent": 61 00:36:03.643 } 00:36:03.643 }, 00:36:03.643 "base_bdevs_list": [ 00:36:03.643 { 00:36:03.643 "name": "spare", 00:36:03.643 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:03.643 "is_configured": true, 00:36:03.643 "data_offset": 2048, 00:36:03.643 "data_size": 63488 
00:36:03.643 }, 00:36:03.643 { 00:36:03.643 "name": null, 00:36:03.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.643 "is_configured": false, 00:36:03.643 "data_offset": 0, 00:36:03.643 "data_size": 63488 00:36:03.643 }, 00:36:03.643 { 00:36:03.643 "name": "BaseBdev3", 00:36:03.643 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:03.643 "is_configured": true, 00:36:03.643 "data_offset": 2048, 00:36:03.643 "data_size": 63488 00:36:03.643 }, 00:36:03.643 { 00:36:03.643 "name": "BaseBdev4", 00:36:03.643 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:03.643 "is_configured": true, 00:36:03.643 "data_offset": 2048, 00:36:03.643 "data_size": 63488 00:36:03.643 } 00:36:03.643 ] 00:36:03.643 }' 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:03.643 17:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:03.643 17:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:03.643 17:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:04.226 [2024-11-26 17:32:41.467480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:36:04.226 [2024-11-26 17:32:41.467976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:36:04.226 [2024-11-26 17:32:41.591085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:36:04.745 106.50 IOPS, 319.50 MiB/s [2024-11-26T17:32:42.192Z] 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:04.745 "name": "raid_bdev1", 00:36:04.745 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:04.745 "strip_size_kb": 0, 00:36:04.745 "state": "online", 00:36:04.745 "raid_level": "raid1", 00:36:04.745 "superblock": true, 00:36:04.745 "num_base_bdevs": 4, 00:36:04.745 "num_base_bdevs_discovered": 3, 00:36:04.745 "num_base_bdevs_operational": 3, 00:36:04.745 "process": { 00:36:04.745 "type": "rebuild", 00:36:04.745 "target": "spare", 00:36:04.745 "progress": { 00:36:04.745 "blocks": 59392, 00:36:04.745 "percent": 93 00:36:04.745 } 00:36:04.745 }, 00:36:04.745 "base_bdevs_list": [ 00:36:04.745 { 00:36:04.745 "name": "spare", 00:36:04.745 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:04.745 "is_configured": true, 00:36:04.745 "data_offset": 2048, 00:36:04.745 "data_size": 63488 
00:36:04.745 }, 00:36:04.745 { 00:36:04.745 "name": null, 00:36:04.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.745 "is_configured": false, 00:36:04.745 "data_offset": 0, 00:36:04.745 "data_size": 63488 00:36:04.745 }, 00:36:04.745 { 00:36:04.745 "name": "BaseBdev3", 00:36:04.745 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:04.745 "is_configured": true, 00:36:04.745 "data_offset": 2048, 00:36:04.745 "data_size": 63488 00:36:04.745 }, 00:36:04.745 { 00:36:04.745 "name": "BaseBdev4", 00:36:04.745 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:04.745 "is_configured": true, 00:36:04.745 "data_offset": 2048, 00:36:04.745 "data_size": 63488 00:36:04.745 } 00:36:04.745 ] 00:36:04.745 }' 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:04.745 17:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:05.004 [2024-11-26 17:32:42.262468] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:05.004 [2024-11-26 17:32:42.368220] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:05.004 [2024-11-26 17:32:42.371967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:05.831 95.57 IOPS, 286.71 MiB/s [2024-11-26T17:32:43.278Z] 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:05.831 17:32:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.831 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:05.831 "name": "raid_bdev1", 00:36:05.831 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:05.831 "strip_size_kb": 0, 00:36:05.831 "state": "online", 00:36:05.831 "raid_level": "raid1", 00:36:05.831 "superblock": true, 00:36:05.831 "num_base_bdevs": 4, 00:36:05.831 "num_base_bdevs_discovered": 3, 00:36:05.831 "num_base_bdevs_operational": 3, 00:36:05.831 "base_bdevs_list": [ 00:36:05.831 { 00:36:05.831 "name": "spare", 00:36:05.831 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:05.831 "is_configured": true, 00:36:05.831 "data_offset": 2048, 00:36:05.831 "data_size": 63488 00:36:05.831 }, 00:36:05.831 { 00:36:05.831 "name": null, 00:36:05.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.831 "is_configured": false, 00:36:05.831 "data_offset": 0, 00:36:05.831 "data_size": 63488 00:36:05.831 }, 00:36:05.831 { 00:36:05.831 "name": "BaseBdev3", 00:36:05.831 "uuid": 
"c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:05.831 "is_configured": true, 00:36:05.831 "data_offset": 2048, 00:36:05.831 "data_size": 63488 00:36:05.831 }, 00:36:05.831 { 00:36:05.831 "name": "BaseBdev4", 00:36:05.831 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:05.831 "is_configured": true, 00:36:05.831 "data_offset": 2048, 00:36:05.831 "data_size": 63488 00:36:05.832 } 00:36:05.832 ] 00:36:05.832 }' 00:36:05.832 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.091 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:06.091 "name": "raid_bdev1", 00:36:06.091 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:06.091 "strip_size_kb": 0, 00:36:06.091 "state": "online", 00:36:06.091 "raid_level": "raid1", 00:36:06.092 "superblock": true, 00:36:06.092 "num_base_bdevs": 4, 00:36:06.092 "num_base_bdevs_discovered": 3, 00:36:06.092 "num_base_bdevs_operational": 3, 00:36:06.092 "base_bdevs_list": [ 00:36:06.092 { 00:36:06.092 "name": "spare", 00:36:06.092 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:06.092 "is_configured": true, 00:36:06.092 "data_offset": 2048, 00:36:06.092 "data_size": 63488 00:36:06.092 }, 00:36:06.092 { 00:36:06.092 "name": null, 00:36:06.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:06.092 "is_configured": false, 00:36:06.092 "data_offset": 0, 00:36:06.092 "data_size": 63488 00:36:06.092 }, 00:36:06.092 { 00:36:06.092 "name": "BaseBdev3", 00:36:06.092 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:06.092 "is_configured": true, 00:36:06.092 "data_offset": 2048, 00:36:06.092 "data_size": 63488 00:36:06.092 }, 00:36:06.092 { 00:36:06.092 "name": "BaseBdev4", 00:36:06.092 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:06.092 "is_configured": true, 00:36:06.092 "data_offset": 2048, 00:36:06.092 "data_size": 63488 00:36:06.092 } 00:36:06.092 ] 00:36:06.092 }' 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:06.092 
17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:06.092 "name": "raid_bdev1", 00:36:06.092 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:06.092 "strip_size_kb": 0, 00:36:06.092 "state": "online", 00:36:06.092 "raid_level": "raid1", 00:36:06.092 
"superblock": true, 00:36:06.092 "num_base_bdevs": 4, 00:36:06.092 "num_base_bdevs_discovered": 3, 00:36:06.092 "num_base_bdevs_operational": 3, 00:36:06.092 "base_bdevs_list": [ 00:36:06.092 { 00:36:06.092 "name": "spare", 00:36:06.092 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:06.092 "is_configured": true, 00:36:06.092 "data_offset": 2048, 00:36:06.092 "data_size": 63488 00:36:06.092 }, 00:36:06.092 { 00:36:06.092 "name": null, 00:36:06.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:06.092 "is_configured": false, 00:36:06.092 "data_offset": 0, 00:36:06.092 "data_size": 63488 00:36:06.092 }, 00:36:06.092 { 00:36:06.092 "name": "BaseBdev3", 00:36:06.092 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:06.092 "is_configured": true, 00:36:06.092 "data_offset": 2048, 00:36:06.092 "data_size": 63488 00:36:06.092 }, 00:36:06.092 { 00:36:06.092 "name": "BaseBdev4", 00:36:06.092 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:06.092 "is_configured": true, 00:36:06.092 "data_offset": 2048, 00:36:06.092 "data_size": 63488 00:36:06.092 } 00:36:06.092 ] 00:36:06.092 }' 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:06.092 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:06.610 87.62 IOPS, 262.88 MiB/s [2024-11-26T17:32:44.057Z] 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:06.610 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.610 17:32:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:06.610 [2024-11-26 17:32:43.927268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:06.610 [2024-11-26 17:32:43.927306] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:06.610 00:36:06.610 Latency(us) 00:36:06.610 
[2024-11-26T17:32:44.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:06.610 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:36:06.610 raid_bdev1 : 8.23 86.26 258.78 0.00 0.00 16919.74 294.52 112347.43 00:36:06.610 [2024-11-26T17:32:44.057Z] =================================================================================================================== 00:36:06.610 [2024-11-26T17:32:44.057Z] Total : 86.26 258.78 0.00 0.00 16919.74 294.52 112347.43 00:36:06.610 [2024-11-26 17:32:44.016519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:06.610 [2024-11-26 17:32:44.016590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:06.610 [2024-11-26 17:32:44.016680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:06.610 [2024-11-26 17:32:44.016695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:36:06.610 { 00:36:06.610 "results": [ 00:36:06.610 { 00:36:06.610 "job": "raid_bdev1", 00:36:06.610 "core_mask": "0x1", 00:36:06.610 "workload": "randrw", 00:36:06.610 "percentage": 50, 00:36:06.610 "status": "finished", 00:36:06.610 "queue_depth": 2, 00:36:06.610 "io_size": 3145728, 00:36:06.610 "runtime": 8.231072, 00:36:06.610 "iops": 86.25850921969824, 00:36:06.610 "mibps": 258.77552765909473, 00:36:06.610 "io_failed": 0, 00:36:06.610 "io_timeout": 0, 00:36:06.610 "avg_latency_us": 16919.74224010731, 00:36:06.610 "min_latency_us": 294.52190476190475, 00:36:06.610 "max_latency_us": 112347.42857142857 00:36:06.610 } 00:36:06.610 ], 00:36:06.610 "core_count": 1 00:36:06.610 } 00:36:06.610 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.610 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:06.610 
17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.610 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:06.610 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:36:06.610 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:06.870 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:36:07.129 /dev/nbd0 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:07.129 
17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:07.129 1+0 records in 00:36:07.129 1+0 records out 00:36:07.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274714 s, 14.9 MB/s 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:36:07.129 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:07.130 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:36:07.389 /dev/nbd1 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:07.389 1+0 records in 00:36:07.389 1+0 records out 00:36:07.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033525 s, 12.2 MB/s 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:07.389 17:32:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:07.389 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:07.649 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:36:07.649 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:07.649 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:36:07.649 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:07.649 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:36:07.649 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:07.649 17:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:36:07.649 
17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:07.649 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:36:07.909 /dev/nbd1 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:07.909 1+0 records in 00:36:07.909 1+0 records out 00:36:07.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343205 s, 11.9 MB/s 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:07.909 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:08.167 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:08.426 17:32:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:08.426 [2024-11-26 17:32:45.842612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:08.426 
[2024-11-26 17:32:45.842681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:08.426 [2024-11-26 17:32:45.842706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:36:08.426 [2024-11-26 17:32:45.842721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:08.426 [2024-11-26 17:32:45.845308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:08.426 [2024-11-26 17:32:45.845357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:08.426 [2024-11-26 17:32:45.845445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:08.426 [2024-11-26 17:32:45.845506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:08.426 [2024-11-26 17:32:45.845629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:08.426 [2024-11-26 17:32:45.845732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:08.426 spare 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.426 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:08.684 [2024-11-26 17:32:45.945821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:36:08.684 [2024-11-26 17:32:45.945860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:08.684 [2024-11-26 17:32:45.946195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:36:08.684 [2024-11-26 17:32:45.946410] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:36:08.684 [2024-11-26 17:32:45.946422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:36:08.684 [2024-11-26 17:32:45.946644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.684 17:32:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.684 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:08.684 "name": "raid_bdev1", 00:36:08.684 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:08.684 "strip_size_kb": 0, 00:36:08.685 "state": "online", 00:36:08.685 "raid_level": "raid1", 00:36:08.685 "superblock": true, 00:36:08.685 "num_base_bdevs": 4, 00:36:08.685 "num_base_bdevs_discovered": 3, 00:36:08.685 "num_base_bdevs_operational": 3, 00:36:08.685 "base_bdevs_list": [ 00:36:08.685 { 00:36:08.685 "name": "spare", 00:36:08.685 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:08.685 "is_configured": true, 00:36:08.685 "data_offset": 2048, 00:36:08.685 "data_size": 63488 00:36:08.685 }, 00:36:08.685 { 00:36:08.685 "name": null, 00:36:08.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:08.685 "is_configured": false, 00:36:08.685 "data_offset": 2048, 00:36:08.685 "data_size": 63488 00:36:08.685 }, 00:36:08.685 { 00:36:08.685 "name": "BaseBdev3", 00:36:08.685 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:08.685 "is_configured": true, 00:36:08.685 "data_offset": 2048, 00:36:08.685 "data_size": 63488 00:36:08.685 }, 00:36:08.685 { 00:36:08.685 "name": "BaseBdev4", 00:36:08.685 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:08.685 "is_configured": true, 00:36:08.685 "data_offset": 2048, 00:36:08.685 "data_size": 63488 00:36:08.685 } 00:36:08.685 ] 00:36:08.685 }' 00:36:08.685 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:08.685 17:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:08.943 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:08.943 17:32:46 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:08.943 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:08.943 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:08.943 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:08.943 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.943 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:08.943 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.943 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:09.202 "name": "raid_bdev1", 00:36:09.202 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:09.202 "strip_size_kb": 0, 00:36:09.202 "state": "online", 00:36:09.202 "raid_level": "raid1", 00:36:09.202 "superblock": true, 00:36:09.202 "num_base_bdevs": 4, 00:36:09.202 "num_base_bdevs_discovered": 3, 00:36:09.202 "num_base_bdevs_operational": 3, 00:36:09.202 "base_bdevs_list": [ 00:36:09.202 { 00:36:09.202 "name": "spare", 00:36:09.202 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:09.202 "is_configured": true, 00:36:09.202 "data_offset": 2048, 00:36:09.202 "data_size": 63488 00:36:09.202 }, 00:36:09.202 { 00:36:09.202 "name": null, 00:36:09.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.202 "is_configured": false, 00:36:09.202 "data_offset": 2048, 00:36:09.202 "data_size": 63488 00:36:09.202 }, 00:36:09.202 { 00:36:09.202 "name": "BaseBdev3", 00:36:09.202 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 
00:36:09.202 "is_configured": true, 00:36:09.202 "data_offset": 2048, 00:36:09.202 "data_size": 63488 00:36:09.202 }, 00:36:09.202 { 00:36:09.202 "name": "BaseBdev4", 00:36:09.202 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:09.202 "is_configured": true, 00:36:09.202 "data_offset": 2048, 00:36:09.202 "data_size": 63488 00:36:09.202 } 00:36:09.202 ] 00:36:09.202 }' 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:09.202 [2024-11-26 17:32:46.566968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:09.202 "name": "raid_bdev1", 00:36:09.202 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:09.202 "strip_size_kb": 0, 00:36:09.202 "state": 
"online", 00:36:09.202 "raid_level": "raid1", 00:36:09.202 "superblock": true, 00:36:09.202 "num_base_bdevs": 4, 00:36:09.202 "num_base_bdevs_discovered": 2, 00:36:09.202 "num_base_bdevs_operational": 2, 00:36:09.202 "base_bdevs_list": [ 00:36:09.202 { 00:36:09.202 "name": null, 00:36:09.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.202 "is_configured": false, 00:36:09.202 "data_offset": 0, 00:36:09.202 "data_size": 63488 00:36:09.202 }, 00:36:09.202 { 00:36:09.202 "name": null, 00:36:09.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.202 "is_configured": false, 00:36:09.202 "data_offset": 2048, 00:36:09.202 "data_size": 63488 00:36:09.202 }, 00:36:09.202 { 00:36:09.202 "name": "BaseBdev3", 00:36:09.202 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:09.202 "is_configured": true, 00:36:09.202 "data_offset": 2048, 00:36:09.202 "data_size": 63488 00:36:09.202 }, 00:36:09.202 { 00:36:09.202 "name": "BaseBdev4", 00:36:09.202 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:09.202 "is_configured": true, 00:36:09.202 "data_offset": 2048, 00:36:09.202 "data_size": 63488 00:36:09.202 } 00:36:09.202 ] 00:36:09.202 }' 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:09.202 17:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:09.770 17:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:09.770 17:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.770 17:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:09.770 [2024-11-26 17:32:47.007149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:09.770 [2024-11-26 17:32:47.007342] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:36:09.770 [2024-11-26 17:32:47.007363] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:09.770 [2024-11-26 17:32:47.007406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:09.770 [2024-11-26 17:32:47.023126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:36:09.770 17:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.770 17:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:36:09.770 [2024-11-26 17:32:47.025305] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:10.706 
"name": "raid_bdev1", 00:36:10.706 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:10.706 "strip_size_kb": 0, 00:36:10.706 "state": "online", 00:36:10.706 "raid_level": "raid1", 00:36:10.706 "superblock": true, 00:36:10.706 "num_base_bdevs": 4, 00:36:10.706 "num_base_bdevs_discovered": 3, 00:36:10.706 "num_base_bdevs_operational": 3, 00:36:10.706 "process": { 00:36:10.706 "type": "rebuild", 00:36:10.706 "target": "spare", 00:36:10.706 "progress": { 00:36:10.706 "blocks": 20480, 00:36:10.706 "percent": 32 00:36:10.706 } 00:36:10.706 }, 00:36:10.706 "base_bdevs_list": [ 00:36:10.706 { 00:36:10.706 "name": "spare", 00:36:10.706 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:10.706 "is_configured": true, 00:36:10.706 "data_offset": 2048, 00:36:10.706 "data_size": 63488 00:36:10.706 }, 00:36:10.706 { 00:36:10.706 "name": null, 00:36:10.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.706 "is_configured": false, 00:36:10.706 "data_offset": 2048, 00:36:10.706 "data_size": 63488 00:36:10.706 }, 00:36:10.706 { 00:36:10.706 "name": "BaseBdev3", 00:36:10.706 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:10.706 "is_configured": true, 00:36:10.706 "data_offset": 2048, 00:36:10.706 "data_size": 63488 00:36:10.706 }, 00:36:10.706 { 00:36:10.706 "name": "BaseBdev4", 00:36:10.706 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:10.706 "is_configured": true, 00:36:10.706 "data_offset": 2048, 00:36:10.706 "data_size": 63488 00:36:10.706 } 00:36:10.706 ] 00:36:10.706 }' 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:10.706 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:10.966 
17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:10.966 [2024-11-26 17:32:48.166900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:10.966 [2024-11-26 17:32:48.233140] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:10.966 [2024-11-26 17:32:48.233230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:10.966 [2024-11-26 17:32:48.233249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:10.966 [2024-11-26 17:32:48.233260] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:10.966 17:32:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:10.966 "name": "raid_bdev1", 00:36:10.966 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:10.966 "strip_size_kb": 0, 00:36:10.966 "state": "online", 00:36:10.966 "raid_level": "raid1", 00:36:10.966 "superblock": true, 00:36:10.966 "num_base_bdevs": 4, 00:36:10.966 "num_base_bdevs_discovered": 2, 00:36:10.966 "num_base_bdevs_operational": 2, 00:36:10.966 "base_bdevs_list": [ 00:36:10.966 { 00:36:10.966 "name": null, 00:36:10.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.966 "is_configured": false, 00:36:10.966 "data_offset": 0, 00:36:10.966 "data_size": 63488 00:36:10.966 }, 00:36:10.966 { 00:36:10.966 "name": null, 00:36:10.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.966 "is_configured": false, 00:36:10.966 "data_offset": 2048, 00:36:10.966 "data_size": 63488 00:36:10.966 }, 00:36:10.966 { 00:36:10.966 "name": "BaseBdev3", 00:36:10.966 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:10.966 "is_configured": true, 00:36:10.966 "data_offset": 2048, 00:36:10.966 "data_size": 63488 00:36:10.966 }, 00:36:10.966 { 00:36:10.966 "name": "BaseBdev4", 00:36:10.966 "uuid": 
"94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:10.966 "is_configured": true, 00:36:10.966 "data_offset": 2048, 00:36:10.966 "data_size": 63488 00:36:10.966 } 00:36:10.966 ] 00:36:10.966 }' 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:10.966 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:11.535 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:11.535 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.535 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:11.535 [2024-11-26 17:32:48.723019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:11.535 [2024-11-26 17:32:48.723106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:11.535 [2024-11-26 17:32:48.723143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:36:11.535 [2024-11-26 17:32:48.723160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:11.535 [2024-11-26 17:32:48.723692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:11.535 [2024-11-26 17:32:48.723719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:11.535 [2024-11-26 17:32:48.723828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:11.535 [2024-11-26 17:32:48.723864] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:36:11.535 [2024-11-26 17:32:48.723878] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:11.535 [2024-11-26 17:32:48.723915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:11.535 [2024-11-26 17:32:48.740217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:36:11.535 spare 00:36:11.535 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.535 17:32:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:36:11.535 [2024-11-26 17:32:48.742375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:12.473 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:12.473 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:12.473 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:12.474 "name": "raid_bdev1", 00:36:12.474 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:12.474 "strip_size_kb": 0, 00:36:12.474 
"state": "online", 00:36:12.474 "raid_level": "raid1", 00:36:12.474 "superblock": true, 00:36:12.474 "num_base_bdevs": 4, 00:36:12.474 "num_base_bdevs_discovered": 3, 00:36:12.474 "num_base_bdevs_operational": 3, 00:36:12.474 "process": { 00:36:12.474 "type": "rebuild", 00:36:12.474 "target": "spare", 00:36:12.474 "progress": { 00:36:12.474 "blocks": 20480, 00:36:12.474 "percent": 32 00:36:12.474 } 00:36:12.474 }, 00:36:12.474 "base_bdevs_list": [ 00:36:12.474 { 00:36:12.474 "name": "spare", 00:36:12.474 "uuid": "198ee48c-8f44-56a6-8298-8aea7731dcbc", 00:36:12.474 "is_configured": true, 00:36:12.474 "data_offset": 2048, 00:36:12.474 "data_size": 63488 00:36:12.474 }, 00:36:12.474 { 00:36:12.474 "name": null, 00:36:12.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.474 "is_configured": false, 00:36:12.474 "data_offset": 2048, 00:36:12.474 "data_size": 63488 00:36:12.474 }, 00:36:12.474 { 00:36:12.474 "name": "BaseBdev3", 00:36:12.474 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:12.474 "is_configured": true, 00:36:12.474 "data_offset": 2048, 00:36:12.474 "data_size": 63488 00:36:12.474 }, 00:36:12.474 { 00:36:12.474 "name": "BaseBdev4", 00:36:12.474 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:12.474 "is_configured": true, 00:36:12.474 "data_offset": 2048, 00:36:12.474 "data_size": 63488 00:36:12.474 } 00:36:12.474 ] 00:36:12.474 }' 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:36:12.474 17:32:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.474 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:12.474 [2024-11-26 17:32:49.887925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:12.734 [2024-11-26 17:32:49.950272] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:12.734 [2024-11-26 17:32:49.950331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:12.734 [2024-11-26 17:32:49.950351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:12.734 [2024-11-26 17:32:49.950360] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:12.734 17:32:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.734 17:32:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:12.734 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.734 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:12.734 "name": "raid_bdev1", 00:36:12.734 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:12.734 "strip_size_kb": 0, 00:36:12.734 "state": "online", 00:36:12.734 "raid_level": "raid1", 00:36:12.734 "superblock": true, 00:36:12.734 "num_base_bdevs": 4, 00:36:12.734 "num_base_bdevs_discovered": 2, 00:36:12.734 "num_base_bdevs_operational": 2, 00:36:12.734 "base_bdevs_list": [ 00:36:12.734 { 00:36:12.734 "name": null, 00:36:12.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.734 "is_configured": false, 00:36:12.734 "data_offset": 0, 00:36:12.734 "data_size": 63488 00:36:12.734 }, 00:36:12.734 { 00:36:12.734 "name": null, 00:36:12.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.734 "is_configured": false, 00:36:12.734 "data_offset": 2048, 00:36:12.734 "data_size": 63488 00:36:12.734 }, 00:36:12.734 { 00:36:12.734 "name": "BaseBdev3", 00:36:12.734 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:12.734 "is_configured": true, 00:36:12.734 "data_offset": 2048, 00:36:12.734 "data_size": 63488 00:36:12.734 }, 00:36:12.734 { 00:36:12.734 "name": "BaseBdev4", 00:36:12.734 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:12.734 "is_configured": true, 00:36:12.734 "data_offset": 2048, 00:36:12.734 
"data_size": 63488 00:36:12.734 } 00:36:12.734 ] 00:36:12.734 }' 00:36:12.734 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:12.734 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:13.006 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:13.006 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:13.006 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:13.006 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:13.006 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:13.006 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.006 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.007 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:13.007 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:13.293 "name": "raid_bdev1", 00:36:13.293 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:13.293 "strip_size_kb": 0, 00:36:13.293 "state": "online", 00:36:13.293 "raid_level": "raid1", 00:36:13.293 "superblock": true, 00:36:13.293 "num_base_bdevs": 4, 00:36:13.293 "num_base_bdevs_discovered": 2, 00:36:13.293 "num_base_bdevs_operational": 2, 00:36:13.293 "base_bdevs_list": [ 00:36:13.293 { 00:36:13.293 "name": null, 00:36:13.293 "uuid": "00000000-0000-0000-0000-000000000000", 
00:36:13.293 "is_configured": false, 00:36:13.293 "data_offset": 0, 00:36:13.293 "data_size": 63488 00:36:13.293 }, 00:36:13.293 { 00:36:13.293 "name": null, 00:36:13.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.293 "is_configured": false, 00:36:13.293 "data_offset": 2048, 00:36:13.293 "data_size": 63488 00:36:13.293 }, 00:36:13.293 { 00:36:13.293 "name": "BaseBdev3", 00:36:13.293 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:13.293 "is_configured": true, 00:36:13.293 "data_offset": 2048, 00:36:13.293 "data_size": 63488 00:36:13.293 }, 00:36:13.293 { 00:36:13.293 "name": "BaseBdev4", 00:36:13.293 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:13.293 "is_configured": true, 00:36:13.293 "data_offset": 2048, 00:36:13.293 "data_size": 63488 00:36:13.293 } 00:36:13.293 ] 00:36:13.293 }' 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.293 17:32:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:13.293 [2024-11-26 17:32:50.580082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:13.293 [2024-11-26 17:32:50.580140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:13.293 [2024-11-26 17:32:50.580165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:36:13.293 [2024-11-26 17:32:50.580177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:13.293 [2024-11-26 17:32:50.580657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:13.293 [2024-11-26 17:32:50.580676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:13.293 [2024-11-26 17:32:50.580761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:13.293 [2024-11-26 17:32:50.580776] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:36:13.293 [2024-11-26 17:32:50.580793] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:13.293 [2024-11-26 17:32:50.580804] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:36:13.293 BaseBdev1 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.293 17:32:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:14.231 "name": "raid_bdev1", 00:36:14.231 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:14.231 "strip_size_kb": 0, 00:36:14.231 "state": "online", 00:36:14.231 "raid_level": "raid1", 00:36:14.231 "superblock": true, 00:36:14.231 "num_base_bdevs": 4, 00:36:14.231 "num_base_bdevs_discovered": 2, 00:36:14.231 "num_base_bdevs_operational": 2, 00:36:14.231 "base_bdevs_list": [ 00:36:14.231 { 00:36:14.231 "name": null, 00:36:14.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.231 "is_configured": false, 00:36:14.231 
"data_offset": 0, 00:36:14.231 "data_size": 63488 00:36:14.231 }, 00:36:14.231 { 00:36:14.231 "name": null, 00:36:14.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.231 "is_configured": false, 00:36:14.231 "data_offset": 2048, 00:36:14.231 "data_size": 63488 00:36:14.231 }, 00:36:14.231 { 00:36:14.231 "name": "BaseBdev3", 00:36:14.231 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:14.231 "is_configured": true, 00:36:14.231 "data_offset": 2048, 00:36:14.231 "data_size": 63488 00:36:14.231 }, 00:36:14.231 { 00:36:14.231 "name": "BaseBdev4", 00:36:14.231 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:14.231 "is_configured": true, 00:36:14.231 "data_offset": 2048, 00:36:14.231 "data_size": 63488 00:36:14.231 } 00:36:14.231 ] 00:36:14.231 }' 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:14.231 17:32:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:14.799 "name": "raid_bdev1", 00:36:14.799 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:14.799 "strip_size_kb": 0, 00:36:14.799 "state": "online", 00:36:14.799 "raid_level": "raid1", 00:36:14.799 "superblock": true, 00:36:14.799 "num_base_bdevs": 4, 00:36:14.799 "num_base_bdevs_discovered": 2, 00:36:14.799 "num_base_bdevs_operational": 2, 00:36:14.799 "base_bdevs_list": [ 00:36:14.799 { 00:36:14.799 "name": null, 00:36:14.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.799 "is_configured": false, 00:36:14.799 "data_offset": 0, 00:36:14.799 "data_size": 63488 00:36:14.799 }, 00:36:14.799 { 00:36:14.799 "name": null, 00:36:14.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.799 "is_configured": false, 00:36:14.799 "data_offset": 2048, 00:36:14.799 "data_size": 63488 00:36:14.799 }, 00:36:14.799 { 00:36:14.799 "name": "BaseBdev3", 00:36:14.799 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:14.799 "is_configured": true, 00:36:14.799 "data_offset": 2048, 00:36:14.799 "data_size": 63488 00:36:14.799 }, 00:36:14.799 { 00:36:14.799 "name": "BaseBdev4", 00:36:14.799 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:14.799 "is_configured": true, 00:36:14.799 "data_offset": 2048, 00:36:14.799 "data_size": 63488 00:36:14.799 } 00:36:14.799 ] 00:36:14.799 }' 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:14.799 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:14.800 [2024-11-26 17:32:52.132975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:14.800 [2024-11-26 17:32:52.133166] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:36:14.800 [2024-11-26 17:32:52.133186] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:14.800 request: 00:36:14.800 { 00:36:14.800 "base_bdev": "BaseBdev1", 00:36:14.800 "raid_bdev": "raid_bdev1", 00:36:14.800 "method": "bdev_raid_add_base_bdev", 00:36:14.800 "req_id": 1 00:36:14.800 } 00:36:14.800 Got JSON-RPC error response 00:36:14.800 response: 00:36:14.800 { 00:36:14.800 "code": -22, 
00:36:14.800 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:14.800 } 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:14.800 17:32:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.737 17:32:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:15.737 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.997 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:15.997 "name": "raid_bdev1", 00:36:15.997 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:15.997 "strip_size_kb": 0, 00:36:15.997 "state": "online", 00:36:15.997 "raid_level": "raid1", 00:36:15.997 "superblock": true, 00:36:15.997 "num_base_bdevs": 4, 00:36:15.997 "num_base_bdevs_discovered": 2, 00:36:15.997 "num_base_bdevs_operational": 2, 00:36:15.997 "base_bdevs_list": [ 00:36:15.997 { 00:36:15.997 "name": null, 00:36:15.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.997 "is_configured": false, 00:36:15.997 "data_offset": 0, 00:36:15.997 "data_size": 63488 00:36:15.997 }, 00:36:15.997 { 00:36:15.997 "name": null, 00:36:15.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.997 "is_configured": false, 00:36:15.997 "data_offset": 2048, 00:36:15.997 "data_size": 63488 00:36:15.997 }, 00:36:15.997 { 00:36:15.997 "name": "BaseBdev3", 00:36:15.997 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:15.997 "is_configured": true, 00:36:15.997 "data_offset": 2048, 00:36:15.997 "data_size": 63488 00:36:15.997 }, 00:36:15.997 { 00:36:15.997 "name": "BaseBdev4", 00:36:15.997 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:15.997 "is_configured": true, 00:36:15.997 "data_offset": 2048, 00:36:15.997 "data_size": 63488 00:36:15.997 } 00:36:15.997 ] 00:36:15.997 }' 00:36:15.997 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:15.997 17:32:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:16.256 "name": "raid_bdev1", 00:36:16.256 "uuid": "1531ce49-2e53-4c6a-9473-a21c6f93677f", 00:36:16.256 "strip_size_kb": 0, 00:36:16.256 "state": "online", 00:36:16.256 "raid_level": "raid1", 00:36:16.256 "superblock": true, 00:36:16.256 "num_base_bdevs": 4, 00:36:16.256 "num_base_bdevs_discovered": 2, 00:36:16.256 "num_base_bdevs_operational": 2, 00:36:16.256 "base_bdevs_list": [ 00:36:16.256 { 00:36:16.256 "name": null, 00:36:16.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.256 "is_configured": false, 00:36:16.256 "data_offset": 0, 00:36:16.256 "data_size": 63488 00:36:16.256 }, 00:36:16.256 { 00:36:16.256 "name": null, 00:36:16.256 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:16.256 "is_configured": false, 00:36:16.256 "data_offset": 2048, 00:36:16.256 "data_size": 63488 00:36:16.256 }, 00:36:16.256 { 00:36:16.256 "name": "BaseBdev3", 00:36:16.256 "uuid": "c0612ada-6677-55be-8d1f-87f32e3901f5", 00:36:16.256 "is_configured": true, 00:36:16.256 "data_offset": 2048, 00:36:16.256 "data_size": 63488 00:36:16.256 }, 00:36:16.256 { 00:36:16.256 "name": "BaseBdev4", 00:36:16.256 "uuid": "94373632-e9eb-5e76-bfb5-963d9b5bcd8c", 00:36:16.256 "is_configured": true, 00:36:16.256 "data_offset": 2048, 00:36:16.256 "data_size": 63488 00:36:16.256 } 00:36:16.256 ] 00:36:16.256 }' 00:36:16.256 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79636 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79636 ']' 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79636 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79636 00:36:16.516 killing process with pid 79636 00:36:16.516 Received shutdown signal, test time was about 18.044417 seconds 00:36:16.516 00:36:16.516 Latency(us) 00:36:16.516 [2024-11-26T17:32:53.963Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:36:16.516 [2024-11-26T17:32:53.963Z] =================================================================================================================== 00:36:16.516 [2024-11-26T17:32:53.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79636' 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79636 00:36:16.516 [2024-11-26 17:32:53.808619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:16.516 17:32:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79636 00:36:16.516 [2024-11-26 17:32:53.808744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:16.516 [2024-11-26 17:32:53.808813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:16.516 [2024-11-26 17:32:53.808831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:36:17.084 [2024-11-26 17:32:54.234464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:18.019 17:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:36:18.019 00:36:18.019 real 0m21.663s 00:36:18.019 user 0m28.306s 00:36:18.019 sys 0m2.768s 00:36:18.019 17:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:18.019 17:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:36:18.019 ************************************ 00:36:18.019 END TEST raid_rebuild_test_sb_io 00:36:18.019 
************************************ 00:36:18.278 17:32:55 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:36:18.278 17:32:55 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:36:18.278 17:32:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:18.278 17:32:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:18.278 17:32:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:18.278 ************************************ 00:36:18.278 START TEST raid5f_state_function_test 00:36:18.278 ************************************ 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:18.278 17:32:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:36:18.278 Process raid pid: 80358 00:36:18.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80358 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80358' 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80358 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80358 ']' 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.278 17:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.278 [2024-11-26 17:32:55.580419] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:36:18.278 [2024-11-26 17:32:55.580568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:18.536 [2024-11-26 17:32:55.751494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.536 [2024-11-26 17:32:55.867183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.794 [2024-11-26 17:32:56.069886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:18.794 [2024-11-26 17:32:56.069927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.052 [2024-11-26 17:32:56.470554] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:19.052 [2024-11-26 17:32:56.470753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:19.052 [2024-11-26 17:32:56.470776] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:19.052 [2024-11-26 17:32:56.470791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:19.052 [2024-11-26 17:32:56.470800] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:36:19.052 [2024-11-26 17:32:56.470812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.052 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:19.310 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:36:19.310 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:19.310 "name": "Existed_Raid", 00:36:19.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.310 "strip_size_kb": 64, 00:36:19.310 "state": "configuring", 00:36:19.310 "raid_level": "raid5f", 00:36:19.310 "superblock": false, 00:36:19.310 "num_base_bdevs": 3, 00:36:19.310 "num_base_bdevs_discovered": 0, 00:36:19.310 "num_base_bdevs_operational": 3, 00:36:19.310 "base_bdevs_list": [ 00:36:19.310 { 00:36:19.310 "name": "BaseBdev1", 00:36:19.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.310 "is_configured": false, 00:36:19.310 "data_offset": 0, 00:36:19.310 "data_size": 0 00:36:19.310 }, 00:36:19.310 { 00:36:19.310 "name": "BaseBdev2", 00:36:19.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.310 "is_configured": false, 00:36:19.310 "data_offset": 0, 00:36:19.310 "data_size": 0 00:36:19.310 }, 00:36:19.310 { 00:36:19.310 "name": "BaseBdev3", 00:36:19.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.310 "is_configured": false, 00:36:19.310 "data_offset": 0, 00:36:19.310 "data_size": 0 00:36:19.310 } 00:36:19.310 ] 00:36:19.310 }' 00:36:19.310 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:19.310 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.568 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:19.568 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.568 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.568 [2024-11-26 17:32:56.914612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:19.568 [2024-11-26 17:32:56.914780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:36:19.568 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.568 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.569 [2024-11-26 17:32:56.926606] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:19.569 [2024-11-26 17:32:56.926783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:19.569 [2024-11-26 17:32:56.926868] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:19.569 [2024-11-26 17:32:56.926912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:19.569 [2024-11-26 17:32:56.926987] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:19.569 [2024-11-26 17:32:56.927028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.569 [2024-11-26 17:32:56.979365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:19.569 BaseBdev1 00:36:19.569 17:32:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.569 17:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.569 [ 00:36:19.569 { 00:36:19.569 "name": "BaseBdev1", 00:36:19.569 "aliases": [ 00:36:19.569 "6116a75e-cb5c-4c5d-bd97-14ecd50da277" 00:36:19.569 ], 00:36:19.569 "product_name": "Malloc disk", 00:36:19.569 "block_size": 512, 00:36:19.569 "num_blocks": 65536, 00:36:19.569 "uuid": "6116a75e-cb5c-4c5d-bd97-14ecd50da277", 00:36:19.569 "assigned_rate_limits": { 00:36:19.569 "rw_ios_per_sec": 0, 00:36:19.569 
"rw_mbytes_per_sec": 0, 00:36:19.569 "r_mbytes_per_sec": 0, 00:36:19.569 "w_mbytes_per_sec": 0 00:36:19.569 }, 00:36:19.569 "claimed": true, 00:36:19.569 "claim_type": "exclusive_write", 00:36:19.569 "zoned": false, 00:36:19.569 "supported_io_types": { 00:36:19.569 "read": true, 00:36:19.569 "write": true, 00:36:19.569 "unmap": true, 00:36:19.569 "flush": true, 00:36:19.569 "reset": true, 00:36:19.569 "nvme_admin": false, 00:36:19.569 "nvme_io": false, 00:36:19.569 "nvme_io_md": false, 00:36:19.569 "write_zeroes": true, 00:36:19.569 "zcopy": true, 00:36:19.569 "get_zone_info": false, 00:36:19.569 "zone_management": false, 00:36:19.569 "zone_append": false, 00:36:19.828 "compare": false, 00:36:19.828 "compare_and_write": false, 00:36:19.828 "abort": true, 00:36:19.828 "seek_hole": false, 00:36:19.828 "seek_data": false, 00:36:19.828 "copy": true, 00:36:19.828 "nvme_iov_md": false 00:36:19.828 }, 00:36:19.828 "memory_domains": [ 00:36:19.828 { 00:36:19.828 "dma_device_id": "system", 00:36:19.828 "dma_device_type": 1 00:36:19.828 }, 00:36:19.828 { 00:36:19.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:19.828 "dma_device_type": 2 00:36:19.828 } 00:36:19.828 ], 00:36:19.828 "driver_specific": {} 00:36:19.828 } 00:36:19.828 ] 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:19.828 17:32:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:19.828 "name": "Existed_Raid", 00:36:19.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.828 "strip_size_kb": 64, 00:36:19.828 "state": "configuring", 00:36:19.828 "raid_level": "raid5f", 00:36:19.828 "superblock": false, 00:36:19.828 "num_base_bdevs": 3, 00:36:19.828 "num_base_bdevs_discovered": 1, 00:36:19.828 "num_base_bdevs_operational": 3, 00:36:19.828 "base_bdevs_list": [ 00:36:19.828 { 00:36:19.828 "name": "BaseBdev1", 00:36:19.828 "uuid": "6116a75e-cb5c-4c5d-bd97-14ecd50da277", 00:36:19.828 "is_configured": true, 00:36:19.828 "data_offset": 0, 00:36:19.828 "data_size": 65536 00:36:19.828 }, 00:36:19.828 { 00:36:19.828 "name": 
"BaseBdev2", 00:36:19.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.828 "is_configured": false, 00:36:19.828 "data_offset": 0, 00:36:19.828 "data_size": 0 00:36:19.828 }, 00:36:19.828 { 00:36:19.828 "name": "BaseBdev3", 00:36:19.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.828 "is_configured": false, 00:36:19.828 "data_offset": 0, 00:36:19.828 "data_size": 0 00:36:19.828 } 00:36:19.828 ] 00:36:19.828 }' 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:19.828 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.086 [2024-11-26 17:32:57.479542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:20.086 [2024-11-26 17:32:57.479732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.086 [2024-11-26 17:32:57.487580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:20.086 [2024-11-26 17:32:57.489698] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:36:20.086 [2024-11-26 17:32:57.489742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:20.086 [2024-11-26 17:32:57.489753] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:20.086 [2024-11-26 17:32:57.489766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:20.086 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:20.087 "name": "Existed_Raid", 00:36:20.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:20.087 "strip_size_kb": 64, 00:36:20.087 "state": "configuring", 00:36:20.087 "raid_level": "raid5f", 00:36:20.087 "superblock": false, 00:36:20.087 "num_base_bdevs": 3, 00:36:20.087 "num_base_bdevs_discovered": 1, 00:36:20.087 "num_base_bdevs_operational": 3, 00:36:20.087 "base_bdevs_list": [ 00:36:20.087 { 00:36:20.087 "name": "BaseBdev1", 00:36:20.087 "uuid": "6116a75e-cb5c-4c5d-bd97-14ecd50da277", 00:36:20.087 "is_configured": true, 00:36:20.087 "data_offset": 0, 00:36:20.087 "data_size": 65536 00:36:20.087 }, 00:36:20.087 { 00:36:20.087 "name": "BaseBdev2", 00:36:20.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:20.087 "is_configured": false, 00:36:20.087 "data_offset": 0, 00:36:20.087 "data_size": 0 00:36:20.087 }, 00:36:20.087 { 00:36:20.087 "name": "BaseBdev3", 00:36:20.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:20.087 "is_configured": false, 00:36:20.087 "data_offset": 0, 00:36:20.087 "data_size": 0 00:36:20.087 } 00:36:20.087 ] 00:36:20.087 }' 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:20.087 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.654 [2024-11-26 17:32:57.939450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:20.654 BaseBdev2 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:20.654 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:20.655 [ 00:36:20.655 { 00:36:20.655 "name": "BaseBdev2", 00:36:20.655 "aliases": [ 00:36:20.655 "27474c2b-173f-4d69-887e-16b756686337" 00:36:20.655 ], 00:36:20.655 "product_name": "Malloc disk", 00:36:20.655 "block_size": 512, 00:36:20.655 "num_blocks": 65536, 00:36:20.655 "uuid": "27474c2b-173f-4d69-887e-16b756686337", 00:36:20.655 "assigned_rate_limits": { 00:36:20.655 "rw_ios_per_sec": 0, 00:36:20.655 "rw_mbytes_per_sec": 0, 00:36:20.655 "r_mbytes_per_sec": 0, 00:36:20.655 "w_mbytes_per_sec": 0 00:36:20.655 }, 00:36:20.655 "claimed": true, 00:36:20.655 "claim_type": "exclusive_write", 00:36:20.655 "zoned": false, 00:36:20.655 "supported_io_types": { 00:36:20.655 "read": true, 00:36:20.655 "write": true, 00:36:20.655 "unmap": true, 00:36:20.655 "flush": true, 00:36:20.655 "reset": true, 00:36:20.655 "nvme_admin": false, 00:36:20.655 "nvme_io": false, 00:36:20.655 "nvme_io_md": false, 00:36:20.655 "write_zeroes": true, 00:36:20.655 "zcopy": true, 00:36:20.655 "get_zone_info": false, 00:36:20.655 "zone_management": false, 00:36:20.655 "zone_append": false, 00:36:20.655 "compare": false, 00:36:20.655 "compare_and_write": false, 00:36:20.655 "abort": true, 00:36:20.655 "seek_hole": false, 00:36:20.655 "seek_data": false, 00:36:20.655 "copy": true, 00:36:20.655 "nvme_iov_md": false 00:36:20.655 }, 00:36:20.655 "memory_domains": [ 00:36:20.655 { 00:36:20.655 "dma_device_id": "system", 00:36:20.655 "dma_device_type": 1 00:36:20.655 }, 00:36:20.655 { 00:36:20.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:20.655 "dma_device_type": 2 00:36:20.655 } 00:36:20.655 ], 00:36:20.655 "driver_specific": {} 00:36:20.655 } 00:36:20.655 ] 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.655 17:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.655 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.655 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:36:20.655 "name": "Existed_Raid", 00:36:20.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:20.655 "strip_size_kb": 64, 00:36:20.655 "state": "configuring", 00:36:20.655 "raid_level": "raid5f", 00:36:20.655 "superblock": false, 00:36:20.655 "num_base_bdevs": 3, 00:36:20.655 "num_base_bdevs_discovered": 2, 00:36:20.655 "num_base_bdevs_operational": 3, 00:36:20.655 "base_bdevs_list": [ 00:36:20.655 { 00:36:20.655 "name": "BaseBdev1", 00:36:20.655 "uuid": "6116a75e-cb5c-4c5d-bd97-14ecd50da277", 00:36:20.655 "is_configured": true, 00:36:20.655 "data_offset": 0, 00:36:20.655 "data_size": 65536 00:36:20.655 }, 00:36:20.655 { 00:36:20.655 "name": "BaseBdev2", 00:36:20.655 "uuid": "27474c2b-173f-4d69-887e-16b756686337", 00:36:20.655 "is_configured": true, 00:36:20.655 "data_offset": 0, 00:36:20.655 "data_size": 65536 00:36:20.655 }, 00:36:20.655 { 00:36:20.655 "name": "BaseBdev3", 00:36:20.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:20.655 "is_configured": false, 00:36:20.655 "data_offset": 0, 00:36:20.655 "data_size": 0 00:36:20.655 } 00:36:20.655 ] 00:36:20.655 }' 00:36:20.655 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:20.655 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.222 [2024-11-26 17:32:58.473953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:21.222 [2024-11-26 17:32:58.474017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:21.222 [2024-11-26 17:32:58.474035] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:36:21.222 [2024-11-26 17:32:58.474487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:21.222 [2024-11-26 17:32:58.480910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:21.222 [2024-11-26 17:32:58.481037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:21.222 [2024-11-26 17:32:58.481545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:21.222 BaseBdev3 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.222 [ 00:36:21.222 { 00:36:21.222 "name": "BaseBdev3", 00:36:21.222 "aliases": [ 00:36:21.222 "9018a77d-ac1e-4b25-b6ba-df040e511941" 00:36:21.222 ], 00:36:21.222 "product_name": "Malloc disk", 00:36:21.222 "block_size": 512, 00:36:21.222 "num_blocks": 65536, 00:36:21.222 "uuid": "9018a77d-ac1e-4b25-b6ba-df040e511941", 00:36:21.222 "assigned_rate_limits": { 00:36:21.222 "rw_ios_per_sec": 0, 00:36:21.222 "rw_mbytes_per_sec": 0, 00:36:21.222 "r_mbytes_per_sec": 0, 00:36:21.222 "w_mbytes_per_sec": 0 00:36:21.222 }, 00:36:21.222 "claimed": true, 00:36:21.222 "claim_type": "exclusive_write", 00:36:21.222 "zoned": false, 00:36:21.222 "supported_io_types": { 00:36:21.222 "read": true, 00:36:21.222 "write": true, 00:36:21.222 "unmap": true, 00:36:21.222 "flush": true, 00:36:21.222 "reset": true, 00:36:21.222 "nvme_admin": false, 00:36:21.222 "nvme_io": false, 00:36:21.222 "nvme_io_md": false, 00:36:21.222 "write_zeroes": true, 00:36:21.222 "zcopy": true, 00:36:21.222 "get_zone_info": false, 00:36:21.222 "zone_management": false, 00:36:21.222 "zone_append": false, 00:36:21.222 "compare": false, 00:36:21.222 "compare_and_write": false, 00:36:21.222 "abort": true, 00:36:21.222 "seek_hole": false, 00:36:21.222 "seek_data": false, 00:36:21.222 "copy": true, 00:36:21.222 "nvme_iov_md": false 00:36:21.222 }, 00:36:21.222 "memory_domains": [ 00:36:21.222 { 00:36:21.222 "dma_device_id": "system", 00:36:21.222 "dma_device_type": 1 00:36:21.222 }, 00:36:21.222 { 00:36:21.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:21.222 "dma_device_type": 2 00:36:21.222 } 00:36:21.222 ], 00:36:21.222 "driver_specific": {} 00:36:21.222 } 00:36:21.222 ] 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:21.222 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:21.223 17:32:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:21.223 "name": "Existed_Raid", 00:36:21.223 "uuid": "da2e53b5-4208-40a9-a602-85f07d3f312b", 00:36:21.223 "strip_size_kb": 64, 00:36:21.223 "state": "online", 00:36:21.223 "raid_level": "raid5f", 00:36:21.223 "superblock": false, 00:36:21.223 "num_base_bdevs": 3, 00:36:21.223 "num_base_bdevs_discovered": 3, 00:36:21.223 "num_base_bdevs_operational": 3, 00:36:21.223 "base_bdevs_list": [ 00:36:21.223 { 00:36:21.223 "name": "BaseBdev1", 00:36:21.223 "uuid": "6116a75e-cb5c-4c5d-bd97-14ecd50da277", 00:36:21.223 "is_configured": true, 00:36:21.223 "data_offset": 0, 00:36:21.223 "data_size": 65536 00:36:21.223 }, 00:36:21.223 { 00:36:21.223 "name": "BaseBdev2", 00:36:21.223 "uuid": "27474c2b-173f-4d69-887e-16b756686337", 00:36:21.223 "is_configured": true, 00:36:21.223 "data_offset": 0, 00:36:21.223 "data_size": 65536 00:36:21.223 }, 00:36:21.223 { 00:36:21.223 "name": "BaseBdev3", 00:36:21.223 "uuid": "9018a77d-ac1e-4b25-b6ba-df040e511941", 00:36:21.223 "is_configured": true, 00:36:21.223 "data_offset": 0, 00:36:21.223 "data_size": 65536 00:36:21.223 } 00:36:21.223 ] 00:36:21.223 }' 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:21.223 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:21.788 17:32:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.788 [2024-11-26 17:32:58.976478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:21.788 17:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:21.788 "name": "Existed_Raid", 00:36:21.788 "aliases": [ 00:36:21.788 "da2e53b5-4208-40a9-a602-85f07d3f312b" 00:36:21.788 ], 00:36:21.788 "product_name": "Raid Volume", 00:36:21.788 "block_size": 512, 00:36:21.788 "num_blocks": 131072, 00:36:21.788 "uuid": "da2e53b5-4208-40a9-a602-85f07d3f312b", 00:36:21.788 "assigned_rate_limits": { 00:36:21.788 "rw_ios_per_sec": 0, 00:36:21.788 "rw_mbytes_per_sec": 0, 00:36:21.788 "r_mbytes_per_sec": 0, 00:36:21.788 "w_mbytes_per_sec": 0 00:36:21.788 }, 00:36:21.788 "claimed": false, 00:36:21.788 "zoned": false, 00:36:21.788 "supported_io_types": { 00:36:21.788 "read": true, 00:36:21.788 "write": true, 00:36:21.788 "unmap": false, 00:36:21.788 "flush": false, 00:36:21.788 "reset": true, 00:36:21.788 "nvme_admin": false, 00:36:21.788 "nvme_io": false, 00:36:21.788 "nvme_io_md": false, 00:36:21.788 "write_zeroes": true, 00:36:21.788 "zcopy": false, 00:36:21.788 "get_zone_info": false, 00:36:21.788 "zone_management": false, 00:36:21.788 "zone_append": false, 
00:36:21.788 "compare": false, 00:36:21.788 "compare_and_write": false, 00:36:21.788 "abort": false, 00:36:21.788 "seek_hole": false, 00:36:21.788 "seek_data": false, 00:36:21.788 "copy": false, 00:36:21.788 "nvme_iov_md": false 00:36:21.788 }, 00:36:21.788 "driver_specific": { 00:36:21.788 "raid": { 00:36:21.788 "uuid": "da2e53b5-4208-40a9-a602-85f07d3f312b", 00:36:21.788 "strip_size_kb": 64, 00:36:21.788 "state": "online", 00:36:21.788 "raid_level": "raid5f", 00:36:21.788 "superblock": false, 00:36:21.788 "num_base_bdevs": 3, 00:36:21.788 "num_base_bdevs_discovered": 3, 00:36:21.788 "num_base_bdevs_operational": 3, 00:36:21.788 "base_bdevs_list": [ 00:36:21.788 { 00:36:21.788 "name": "BaseBdev1", 00:36:21.788 "uuid": "6116a75e-cb5c-4c5d-bd97-14ecd50da277", 00:36:21.788 "is_configured": true, 00:36:21.788 "data_offset": 0, 00:36:21.788 "data_size": 65536 00:36:21.788 }, 00:36:21.788 { 00:36:21.788 "name": "BaseBdev2", 00:36:21.788 "uuid": "27474c2b-173f-4d69-887e-16b756686337", 00:36:21.788 "is_configured": true, 00:36:21.788 "data_offset": 0, 00:36:21.788 "data_size": 65536 00:36:21.788 }, 00:36:21.788 { 00:36:21.788 "name": "BaseBdev3", 00:36:21.788 "uuid": "9018a77d-ac1e-4b25-b6ba-df040e511941", 00:36:21.788 "is_configured": true, 00:36:21.788 "data_offset": 0, 00:36:21.788 "data_size": 65536 00:36:21.788 } 00:36:21.788 ] 00:36:21.788 } 00:36:21.788 } 00:36:21.788 }' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:21.788 BaseBdev2 00:36:21.788 BaseBdev3' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.788 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.045 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:22.045 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:22.045 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:22.045 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.045 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.046 [2024-11-26 17:32:59.248379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:36:22.046 
17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:22.046 "name": "Existed_Raid", 00:36:22.046 "uuid": "da2e53b5-4208-40a9-a602-85f07d3f312b", 00:36:22.046 "strip_size_kb": 64, 00:36:22.046 "state": 
"online", 00:36:22.046 "raid_level": "raid5f", 00:36:22.046 "superblock": false, 00:36:22.046 "num_base_bdevs": 3, 00:36:22.046 "num_base_bdevs_discovered": 2, 00:36:22.046 "num_base_bdevs_operational": 2, 00:36:22.046 "base_bdevs_list": [ 00:36:22.046 { 00:36:22.046 "name": null, 00:36:22.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:22.046 "is_configured": false, 00:36:22.046 "data_offset": 0, 00:36:22.046 "data_size": 65536 00:36:22.046 }, 00:36:22.046 { 00:36:22.046 "name": "BaseBdev2", 00:36:22.046 "uuid": "27474c2b-173f-4d69-887e-16b756686337", 00:36:22.046 "is_configured": true, 00:36:22.046 "data_offset": 0, 00:36:22.046 "data_size": 65536 00:36:22.046 }, 00:36:22.046 { 00:36:22.046 "name": "BaseBdev3", 00:36:22.046 "uuid": "9018a77d-ac1e-4b25-b6ba-df040e511941", 00:36:22.046 "is_configured": true, 00:36:22.046 "data_offset": 0, 00:36:22.046 "data_size": 65536 00:36:22.046 } 00:36:22.046 ] 00:36:22.046 }' 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:22.046 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.631 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:22.631 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:22.631 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:22.631 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.632 [2024-11-26 17:32:59.828824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:22.632 [2024-11-26 17:32:59.828924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:22.632 [2024-11-26 17:32:59.924239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.632 17:32:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.632 [2024-11-26 17:32:59.980308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:22.632 [2024-11-26 17:32:59.980360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.890 BaseBdev2 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:36:22.890 [ 00:36:22.890 { 00:36:22.890 "name": "BaseBdev2", 00:36:22.890 "aliases": [ 00:36:22.890 "bb97062c-340a-4017-880f-52854e1a7cd5" 00:36:22.890 ], 00:36:22.890 "product_name": "Malloc disk", 00:36:22.890 "block_size": 512, 00:36:22.890 "num_blocks": 65536, 00:36:22.890 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:22.890 "assigned_rate_limits": { 00:36:22.890 "rw_ios_per_sec": 0, 00:36:22.890 "rw_mbytes_per_sec": 0, 00:36:22.890 "r_mbytes_per_sec": 0, 00:36:22.890 "w_mbytes_per_sec": 0 00:36:22.890 }, 00:36:22.890 "claimed": false, 00:36:22.890 "zoned": false, 00:36:22.890 "supported_io_types": { 00:36:22.890 "read": true, 00:36:22.890 "write": true, 00:36:22.890 "unmap": true, 00:36:22.890 "flush": true, 00:36:22.890 "reset": true, 00:36:22.890 "nvme_admin": false, 00:36:22.890 "nvme_io": false, 00:36:22.890 "nvme_io_md": false, 00:36:22.890 "write_zeroes": true, 00:36:22.890 "zcopy": true, 00:36:22.890 "get_zone_info": false, 00:36:22.890 "zone_management": false, 00:36:22.890 "zone_append": false, 00:36:22.890 "compare": false, 00:36:22.890 "compare_and_write": false, 00:36:22.890 "abort": true, 00:36:22.890 "seek_hole": false, 00:36:22.890 "seek_data": false, 00:36:22.890 "copy": true, 00:36:22.890 "nvme_iov_md": false 00:36:22.890 }, 00:36:22.890 "memory_domains": [ 00:36:22.890 { 00:36:22.890 "dma_device_id": "system", 00:36:22.890 "dma_device_type": 1 00:36:22.890 }, 00:36:22.890 { 00:36:22.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.890 "dma_device_type": 2 00:36:22.890 } 00:36:22.890 ], 00:36:22.890 "driver_specific": {} 00:36:22.890 } 00:36:22.890 ] 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.890 BaseBdev3 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.890 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:22.891 [ 00:36:22.891 { 00:36:22.891 "name": "BaseBdev3", 00:36:22.891 "aliases": [ 00:36:22.891 "82d6dfcb-f34f-4648-a41e-0cb123c69531" 00:36:22.891 ], 00:36:22.891 "product_name": "Malloc disk", 00:36:22.891 "block_size": 512, 00:36:22.891 "num_blocks": 65536, 00:36:22.891 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:22.891 "assigned_rate_limits": { 00:36:22.891 "rw_ios_per_sec": 0, 00:36:22.891 "rw_mbytes_per_sec": 0, 00:36:22.891 "r_mbytes_per_sec": 0, 00:36:22.891 "w_mbytes_per_sec": 0 00:36:22.891 }, 00:36:22.891 "claimed": false, 00:36:22.891 "zoned": false, 00:36:22.891 "supported_io_types": { 00:36:22.891 "read": true, 00:36:22.891 "write": true, 00:36:22.891 "unmap": true, 00:36:22.891 "flush": true, 00:36:22.891 "reset": true, 00:36:22.891 "nvme_admin": false, 00:36:22.891 "nvme_io": false, 00:36:22.891 "nvme_io_md": false, 00:36:22.891 "write_zeroes": true, 00:36:22.891 "zcopy": true, 00:36:22.891 "get_zone_info": false, 00:36:22.891 "zone_management": false, 00:36:22.891 "zone_append": false, 00:36:22.891 "compare": false, 00:36:22.891 "compare_and_write": false, 00:36:22.891 "abort": true, 00:36:22.891 "seek_hole": false, 00:36:22.891 "seek_data": false, 00:36:22.891 "copy": true, 00:36:22.891 "nvme_iov_md": false 00:36:22.891 }, 00:36:22.891 "memory_domains": [ 00:36:22.891 { 00:36:22.891 "dma_device_id": "system", 00:36:22.891 "dma_device_type": 1 00:36:22.891 }, 00:36:22.891 { 00:36:22.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.891 "dma_device_type": 2 00:36:22.891 } 00:36:22.891 ], 00:36:22.891 "driver_specific": {} 00:36:22.891 } 00:36:22.891 ] 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:22.891 17:33:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.891 [2024-11-26 17:33:00.300648] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:22.891 [2024-11-26 17:33:00.300697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:22.891 [2024-11-26 17:33:00.300726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:22.891 [2024-11-26 17:33:00.302817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:22.891 17:33:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:22.891 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.147 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:23.147 "name": "Existed_Raid", 00:36:23.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.147 "strip_size_kb": 64, 00:36:23.147 "state": "configuring", 00:36:23.147 "raid_level": "raid5f", 00:36:23.147 "superblock": false, 00:36:23.147 "num_base_bdevs": 3, 00:36:23.147 "num_base_bdevs_discovered": 2, 00:36:23.147 "num_base_bdevs_operational": 3, 00:36:23.147 "base_bdevs_list": [ 00:36:23.147 { 00:36:23.147 "name": "BaseBdev1", 00:36:23.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.147 "is_configured": false, 00:36:23.147 "data_offset": 0, 00:36:23.147 "data_size": 0 00:36:23.147 }, 00:36:23.147 { 00:36:23.147 "name": "BaseBdev2", 00:36:23.147 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:23.147 "is_configured": true, 00:36:23.147 "data_offset": 0, 00:36:23.147 "data_size": 65536 00:36:23.147 }, 00:36:23.147 { 00:36:23.147 "name": "BaseBdev3", 00:36:23.147 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:23.147 "is_configured": true, 
00:36:23.147 "data_offset": 0, 00:36:23.147 "data_size": 65536 00:36:23.147 } 00:36:23.147 ] 00:36:23.147 }' 00:36:23.147 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:23.147 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.407 [2024-11-26 17:33:00.780757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:23.407 17:33:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:23.407 "name": "Existed_Raid", 00:36:23.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.407 "strip_size_kb": 64, 00:36:23.407 "state": "configuring", 00:36:23.407 "raid_level": "raid5f", 00:36:23.407 "superblock": false, 00:36:23.407 "num_base_bdevs": 3, 00:36:23.407 "num_base_bdevs_discovered": 1, 00:36:23.407 "num_base_bdevs_operational": 3, 00:36:23.407 "base_bdevs_list": [ 00:36:23.407 { 00:36:23.407 "name": "BaseBdev1", 00:36:23.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.407 "is_configured": false, 00:36:23.407 "data_offset": 0, 00:36:23.407 "data_size": 0 00:36:23.407 }, 00:36:23.407 { 00:36:23.407 "name": null, 00:36:23.407 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:23.407 "is_configured": false, 00:36:23.407 "data_offset": 0, 00:36:23.407 "data_size": 65536 00:36:23.407 }, 00:36:23.407 { 00:36:23.407 "name": "BaseBdev3", 00:36:23.407 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:23.407 "is_configured": true, 00:36:23.407 "data_offset": 0, 00:36:23.407 "data_size": 65536 00:36:23.407 } 00:36:23.407 ] 00:36:23.407 }' 00:36:23.407 17:33:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:23.407 17:33:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.974 [2024-11-26 17:33:01.323199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:23.974 BaseBdev1 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:23.974 17:33:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.974 [ 00:36:23.974 { 00:36:23.974 "name": "BaseBdev1", 00:36:23.974 "aliases": [ 00:36:23.974 "d2422551-cf9c-47dd-aa50-bda1b22500b3" 00:36:23.974 ], 00:36:23.974 "product_name": "Malloc disk", 00:36:23.974 "block_size": 512, 00:36:23.974 "num_blocks": 65536, 00:36:23.974 "uuid": "d2422551-cf9c-47dd-aa50-bda1b22500b3", 00:36:23.974 "assigned_rate_limits": { 00:36:23.974 "rw_ios_per_sec": 0, 00:36:23.974 "rw_mbytes_per_sec": 0, 00:36:23.974 "r_mbytes_per_sec": 0, 00:36:23.974 "w_mbytes_per_sec": 0 00:36:23.974 }, 00:36:23.974 "claimed": true, 00:36:23.974 "claim_type": "exclusive_write", 00:36:23.974 "zoned": false, 00:36:23.974 "supported_io_types": { 00:36:23.974 "read": true, 00:36:23.974 "write": true, 00:36:23.974 "unmap": true, 00:36:23.974 "flush": true, 00:36:23.974 "reset": true, 00:36:23.974 "nvme_admin": false, 00:36:23.974 "nvme_io": false, 00:36:23.974 "nvme_io_md": false, 00:36:23.974 "write_zeroes": true, 00:36:23.974 "zcopy": true, 00:36:23.974 "get_zone_info": false, 00:36:23.974 "zone_management": false, 00:36:23.974 "zone_append": false, 00:36:23.974 
"compare": false, 00:36:23.974 "compare_and_write": false, 00:36:23.974 "abort": true, 00:36:23.974 "seek_hole": false, 00:36:23.974 "seek_data": false, 00:36:23.974 "copy": true, 00:36:23.974 "nvme_iov_md": false 00:36:23.974 }, 00:36:23.974 "memory_domains": [ 00:36:23.974 { 00:36:23.974 "dma_device_id": "system", 00:36:23.974 "dma_device_type": 1 00:36:23.974 }, 00:36:23.974 { 00:36:23.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:23.974 "dma_device_type": 2 00:36:23.974 } 00:36:23.974 ], 00:36:23.974 "driver_specific": {} 00:36:23.974 } 00:36:23.974 ] 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:23.974 17:33:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:23.974 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.975 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.975 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.975 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:23.975 "name": "Existed_Raid", 00:36:23.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.975 "strip_size_kb": 64, 00:36:23.975 "state": "configuring", 00:36:23.975 "raid_level": "raid5f", 00:36:23.975 "superblock": false, 00:36:23.975 "num_base_bdevs": 3, 00:36:23.975 "num_base_bdevs_discovered": 2, 00:36:23.975 "num_base_bdevs_operational": 3, 00:36:23.975 "base_bdevs_list": [ 00:36:23.975 { 00:36:23.975 "name": "BaseBdev1", 00:36:23.975 "uuid": "d2422551-cf9c-47dd-aa50-bda1b22500b3", 00:36:23.975 "is_configured": true, 00:36:23.975 "data_offset": 0, 00:36:23.975 "data_size": 65536 00:36:23.975 }, 00:36:23.975 { 00:36:23.975 "name": null, 00:36:23.975 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:23.975 "is_configured": false, 00:36:23.975 "data_offset": 0, 00:36:23.975 "data_size": 65536 00:36:23.975 }, 00:36:23.975 { 00:36:23.975 "name": "BaseBdev3", 00:36:23.975 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:23.975 "is_configured": true, 00:36:23.975 "data_offset": 0, 00:36:23.975 "data_size": 65536 00:36:23.975 } 00:36:23.975 ] 00:36:23.975 }' 00:36:23.975 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:23.975 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.573 17:33:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.574 [2024-11-26 17:33:01.823355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:24.574 17:33:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:24.574 "name": "Existed_Raid", 00:36:24.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.574 "strip_size_kb": 64, 00:36:24.574 "state": "configuring", 00:36:24.574 "raid_level": "raid5f", 00:36:24.574 "superblock": false, 00:36:24.574 "num_base_bdevs": 3, 00:36:24.574 "num_base_bdevs_discovered": 1, 00:36:24.574 "num_base_bdevs_operational": 3, 00:36:24.574 "base_bdevs_list": [ 00:36:24.574 { 00:36:24.574 "name": "BaseBdev1", 00:36:24.574 "uuid": "d2422551-cf9c-47dd-aa50-bda1b22500b3", 00:36:24.574 "is_configured": true, 00:36:24.574 "data_offset": 0, 00:36:24.574 "data_size": 65536 00:36:24.574 }, 00:36:24.574 { 00:36:24.574 "name": null, 00:36:24.574 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:24.574 "is_configured": false, 00:36:24.574 "data_offset": 0, 00:36:24.574 "data_size": 65536 00:36:24.574 }, 00:36:24.574 { 00:36:24.574 "name": null, 
00:36:24.574 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:24.574 "is_configured": false, 00:36:24.574 "data_offset": 0, 00:36:24.574 "data_size": 65536 00:36:24.574 } 00:36:24.574 ] 00:36:24.574 }' 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:24.574 17:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.835 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:24.835 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.835 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.835 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.093 [2024-11-26 17:33:02.327531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:25.093 17:33:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:25.093 "name": "Existed_Raid", 00:36:25.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.093 "strip_size_kb": 64, 00:36:25.093 "state": "configuring", 00:36:25.093 "raid_level": "raid5f", 00:36:25.093 "superblock": false, 00:36:25.093 "num_base_bdevs": 3, 00:36:25.093 "num_base_bdevs_discovered": 2, 00:36:25.093 "num_base_bdevs_operational": 3, 00:36:25.093 "base_bdevs_list": [ 00:36:25.093 { 
00:36:25.093 "name": "BaseBdev1", 00:36:25.093 "uuid": "d2422551-cf9c-47dd-aa50-bda1b22500b3", 00:36:25.093 "is_configured": true, 00:36:25.093 "data_offset": 0, 00:36:25.093 "data_size": 65536 00:36:25.093 }, 00:36:25.093 { 00:36:25.093 "name": null, 00:36:25.093 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:25.093 "is_configured": false, 00:36:25.093 "data_offset": 0, 00:36:25.093 "data_size": 65536 00:36:25.093 }, 00:36:25.093 { 00:36:25.093 "name": "BaseBdev3", 00:36:25.093 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:25.093 "is_configured": true, 00:36:25.093 "data_offset": 0, 00:36:25.093 "data_size": 65536 00:36:25.093 } 00:36:25.093 ] 00:36:25.093 }' 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:25.093 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.352 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:25.352 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:25.352 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.352 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.352 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.609 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:25.609 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:25.609 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.609 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.610 [2024-11-26 17:33:02.811611] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:25.610 "name": "Existed_Raid", 00:36:25.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.610 "strip_size_kb": 64, 00:36:25.610 "state": "configuring", 00:36:25.610 "raid_level": "raid5f", 00:36:25.610 "superblock": false, 00:36:25.610 "num_base_bdevs": 3, 00:36:25.610 "num_base_bdevs_discovered": 1, 00:36:25.610 "num_base_bdevs_operational": 3, 00:36:25.610 "base_bdevs_list": [ 00:36:25.610 { 00:36:25.610 "name": null, 00:36:25.610 "uuid": "d2422551-cf9c-47dd-aa50-bda1b22500b3", 00:36:25.610 "is_configured": false, 00:36:25.610 "data_offset": 0, 00:36:25.610 "data_size": 65536 00:36:25.610 }, 00:36:25.610 { 00:36:25.610 "name": null, 00:36:25.610 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:25.610 "is_configured": false, 00:36:25.610 "data_offset": 0, 00:36:25.610 "data_size": 65536 00:36:25.610 }, 00:36:25.610 { 00:36:25.610 "name": "BaseBdev3", 00:36:25.610 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:25.610 "is_configured": true, 00:36:25.610 "data_offset": 0, 00:36:25.610 "data_size": 65536 00:36:25.610 } 00:36:25.610 ] 00:36:25.610 }' 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:25.610 17:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.178 [2024-11-26 17:33:03.426244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.178 17:33:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.178 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:26.178 "name": "Existed_Raid", 00:36:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.178 "strip_size_kb": 64, 00:36:26.178 "state": "configuring", 00:36:26.178 "raid_level": "raid5f", 00:36:26.178 "superblock": false, 00:36:26.178 "num_base_bdevs": 3, 00:36:26.178 "num_base_bdevs_discovered": 2, 00:36:26.179 "num_base_bdevs_operational": 3, 00:36:26.179 "base_bdevs_list": [ 00:36:26.179 { 00:36:26.179 "name": null, 00:36:26.179 "uuid": "d2422551-cf9c-47dd-aa50-bda1b22500b3", 00:36:26.179 "is_configured": false, 00:36:26.179 "data_offset": 0, 00:36:26.179 "data_size": 65536 00:36:26.179 }, 00:36:26.179 { 00:36:26.179 "name": "BaseBdev2", 00:36:26.179 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:26.179 "is_configured": true, 00:36:26.179 "data_offset": 0, 00:36:26.179 "data_size": 65536 00:36:26.179 }, 00:36:26.179 { 00:36:26.179 "name": "BaseBdev3", 00:36:26.179 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:26.179 "is_configured": true, 00:36:26.179 "data_offset": 0, 00:36:26.179 "data_size": 65536 00:36:26.179 } 00:36:26.179 ] 00:36:26.179 }' 00:36:26.179 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:26.179 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.437 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:26.437 
17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.437 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.437 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d2422551-cf9c-47dd-aa50-bda1b22500b3 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.697 [2024-11-26 17:33:03.989492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:26.697 [2024-11-26 17:33:03.989547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:26.697 [2024-11-26 17:33:03.989558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:36:26.697 [2024-11-26 17:33:03.989808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:36:26.697 [2024-11-26 17:33:03.995423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:26.697 [2024-11-26 17:33:03.995447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:26.697 [2024-11-26 17:33:03.995728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:26.697 NewBaseBdev 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.697 17:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.697 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.697 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:26.697 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.697 17:33:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.697 [ 00:36:26.697 { 00:36:26.697 "name": "NewBaseBdev", 00:36:26.697 "aliases": [ 00:36:26.697 "d2422551-cf9c-47dd-aa50-bda1b22500b3" 00:36:26.697 ], 00:36:26.697 "product_name": "Malloc disk", 00:36:26.697 "block_size": 512, 00:36:26.697 "num_blocks": 65536, 00:36:26.697 "uuid": "d2422551-cf9c-47dd-aa50-bda1b22500b3", 00:36:26.697 "assigned_rate_limits": { 00:36:26.697 "rw_ios_per_sec": 0, 00:36:26.697 "rw_mbytes_per_sec": 0, 00:36:26.697 "r_mbytes_per_sec": 0, 00:36:26.697 "w_mbytes_per_sec": 0 00:36:26.697 }, 00:36:26.697 "claimed": true, 00:36:26.698 "claim_type": "exclusive_write", 00:36:26.698 "zoned": false, 00:36:26.698 "supported_io_types": { 00:36:26.698 "read": true, 00:36:26.698 "write": true, 00:36:26.698 "unmap": true, 00:36:26.698 "flush": true, 00:36:26.698 "reset": true, 00:36:26.698 "nvme_admin": false, 00:36:26.698 "nvme_io": false, 00:36:26.698 "nvme_io_md": false, 00:36:26.698 "write_zeroes": true, 00:36:26.698 "zcopy": true, 00:36:26.698 "get_zone_info": false, 00:36:26.698 "zone_management": false, 00:36:26.698 "zone_append": false, 00:36:26.698 "compare": false, 00:36:26.698 "compare_and_write": false, 00:36:26.698 "abort": true, 00:36:26.698 "seek_hole": false, 00:36:26.698 "seek_data": false, 00:36:26.698 "copy": true, 00:36:26.698 "nvme_iov_md": false 00:36:26.698 }, 00:36:26.698 "memory_domains": [ 00:36:26.698 { 00:36:26.698 "dma_device_id": "system", 00:36:26.698 "dma_device_type": 1 00:36:26.698 }, 00:36:26.698 { 00:36:26.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.698 "dma_device_type": 2 00:36:26.698 } 00:36:26.698 ], 00:36:26.698 "driver_specific": {} 00:36:26.698 } 00:36:26.698 ] 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:26.698 17:33:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:26.698 "name": "Existed_Raid", 00:36:26.698 "uuid": "b38657ba-9e10-416e-ab28-064bcc486165", 00:36:26.698 "strip_size_kb": 64, 00:36:26.698 "state": "online", 
00:36:26.698 "raid_level": "raid5f", 00:36:26.698 "superblock": false, 00:36:26.698 "num_base_bdevs": 3, 00:36:26.698 "num_base_bdevs_discovered": 3, 00:36:26.698 "num_base_bdevs_operational": 3, 00:36:26.698 "base_bdevs_list": [ 00:36:26.698 { 00:36:26.698 "name": "NewBaseBdev", 00:36:26.698 "uuid": "d2422551-cf9c-47dd-aa50-bda1b22500b3", 00:36:26.698 "is_configured": true, 00:36:26.698 "data_offset": 0, 00:36:26.698 "data_size": 65536 00:36:26.698 }, 00:36:26.698 { 00:36:26.698 "name": "BaseBdev2", 00:36:26.698 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:26.698 "is_configured": true, 00:36:26.698 "data_offset": 0, 00:36:26.698 "data_size": 65536 00:36:26.698 }, 00:36:26.698 { 00:36:26.698 "name": "BaseBdev3", 00:36:26.698 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:26.698 "is_configured": true, 00:36:26.698 "data_offset": 0, 00:36:26.698 "data_size": 65536 00:36:26.698 } 00:36:26.698 ] 00:36:26.698 }' 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:26.698 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:27.266 17:33:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.266 [2024-11-26 17:33:04.474433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.266 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:27.266 "name": "Existed_Raid", 00:36:27.266 "aliases": [ 00:36:27.266 "b38657ba-9e10-416e-ab28-064bcc486165" 00:36:27.266 ], 00:36:27.266 "product_name": "Raid Volume", 00:36:27.266 "block_size": 512, 00:36:27.266 "num_blocks": 131072, 00:36:27.266 "uuid": "b38657ba-9e10-416e-ab28-064bcc486165", 00:36:27.266 "assigned_rate_limits": { 00:36:27.266 "rw_ios_per_sec": 0, 00:36:27.266 "rw_mbytes_per_sec": 0, 00:36:27.266 "r_mbytes_per_sec": 0, 00:36:27.266 "w_mbytes_per_sec": 0 00:36:27.266 }, 00:36:27.266 "claimed": false, 00:36:27.266 "zoned": false, 00:36:27.266 "supported_io_types": { 00:36:27.266 "read": true, 00:36:27.266 "write": true, 00:36:27.266 "unmap": false, 00:36:27.266 "flush": false, 00:36:27.266 "reset": true, 00:36:27.266 "nvme_admin": false, 00:36:27.266 "nvme_io": false, 00:36:27.266 "nvme_io_md": false, 00:36:27.266 "write_zeroes": true, 00:36:27.266 "zcopy": false, 00:36:27.266 "get_zone_info": false, 00:36:27.266 "zone_management": false, 00:36:27.266 "zone_append": false, 00:36:27.266 "compare": false, 00:36:27.267 "compare_and_write": false, 00:36:27.267 "abort": false, 00:36:27.267 "seek_hole": false, 00:36:27.267 "seek_data": false, 00:36:27.267 "copy": false, 00:36:27.267 "nvme_iov_md": false 00:36:27.267 }, 00:36:27.267 "driver_specific": { 00:36:27.267 "raid": { 00:36:27.267 "uuid": 
"b38657ba-9e10-416e-ab28-064bcc486165", 00:36:27.267 "strip_size_kb": 64, 00:36:27.267 "state": "online", 00:36:27.267 "raid_level": "raid5f", 00:36:27.267 "superblock": false, 00:36:27.267 "num_base_bdevs": 3, 00:36:27.267 "num_base_bdevs_discovered": 3, 00:36:27.267 "num_base_bdevs_operational": 3, 00:36:27.267 "base_bdevs_list": [ 00:36:27.267 { 00:36:27.267 "name": "NewBaseBdev", 00:36:27.267 "uuid": "d2422551-cf9c-47dd-aa50-bda1b22500b3", 00:36:27.267 "is_configured": true, 00:36:27.267 "data_offset": 0, 00:36:27.267 "data_size": 65536 00:36:27.267 }, 00:36:27.267 { 00:36:27.267 "name": "BaseBdev2", 00:36:27.267 "uuid": "bb97062c-340a-4017-880f-52854e1a7cd5", 00:36:27.267 "is_configured": true, 00:36:27.267 "data_offset": 0, 00:36:27.267 "data_size": 65536 00:36:27.267 }, 00:36:27.267 { 00:36:27.267 "name": "BaseBdev3", 00:36:27.267 "uuid": "82d6dfcb-f34f-4648-a41e-0cb123c69531", 00:36:27.267 "is_configured": true, 00:36:27.267 "data_offset": 0, 00:36:27.267 "data_size": 65536 00:36:27.267 } 00:36:27.267 ] 00:36:27.267 } 00:36:27.267 } 00:36:27.267 }' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:27.267 BaseBdev2 00:36:27.267 BaseBdev3' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.267 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.527 [2024-11-26 17:33:04.730297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:27.527 [2024-11-26 17:33:04.730328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:27.527 [2024-11-26 17:33:04.730413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:27.527 [2024-11-26 17:33:04.730698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:27.527 [2024-11-26 17:33:04.730715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80358 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80358 ']' 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80358 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80358 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:27.527 killing process with pid 80358 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80358' 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80358 00:36:27.527 [2024-11-26 17:33:04.777548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:27.527 17:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80358 00:36:27.787 [2024-11-26 17:33:05.083395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:29.166 ************************************ 00:36:29.166 END TEST raid5f_state_function_test 00:36:29.166 ************************************ 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:36:29.166 00:36:29.166 real 0m10.722s 00:36:29.166 user 0m17.089s 00:36:29.166 sys 0m1.974s 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:29.166 17:33:06 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:36:29.166 17:33:06 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:29.166 17:33:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:29.166 17:33:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:29.166 ************************************ 00:36:29.166 START TEST raid5f_state_function_test_sb 00:36:29.166 ************************************ 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:29.166 17:33:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80986 00:36:29.166 Process raid pid: 80986 00:36:29.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80986' 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80986 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80986 ']' 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:29.166 17:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.166 [2024-11-26 17:33:06.399685] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:36:29.166 [2024-11-26 17:33:06.400123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:29.166 [2024-11-26 17:33:06.588772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.425 [2024-11-26 17:33:06.703436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:29.686 [2024-11-26 17:33:06.904268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:29.686 [2024-11-26 17:33:06.904318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:29.945 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.945 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:36:29.945 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.946 [2024-11-26 17:33:07.237795] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:29.946 [2024-11-26 17:33:07.237848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:29.946 [2024-11-26 17:33:07.237861] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:29.946 [2024-11-26 17:33:07.237876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:29.946 [2024-11-26 17:33:07.237891] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:36:29.946 [2024-11-26 17:33:07.237904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.946 17:33:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:29.946 "name": "Existed_Raid", 00:36:29.946 "uuid": "9fceb974-a3fe-4320-9338-1e27c87f9eba", 00:36:29.946 "strip_size_kb": 64, 00:36:29.946 "state": "configuring", 00:36:29.946 "raid_level": "raid5f", 00:36:29.946 "superblock": true, 00:36:29.946 "num_base_bdevs": 3, 00:36:29.946 "num_base_bdevs_discovered": 0, 00:36:29.946 "num_base_bdevs_operational": 3, 00:36:29.946 "base_bdevs_list": [ 00:36:29.946 { 00:36:29.946 "name": "BaseBdev1", 00:36:29.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:29.946 "is_configured": false, 00:36:29.946 "data_offset": 0, 00:36:29.946 "data_size": 0 00:36:29.946 }, 00:36:29.946 { 00:36:29.946 "name": "BaseBdev2", 00:36:29.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:29.946 "is_configured": false, 00:36:29.946 "data_offset": 0, 00:36:29.946 "data_size": 0 00:36:29.946 }, 00:36:29.946 { 00:36:29.946 "name": "BaseBdev3", 00:36:29.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:29.946 "is_configured": false, 00:36:29.946 "data_offset": 0, 00:36:29.946 "data_size": 0 00:36:29.946 } 00:36:29.946 ] 00:36:29.946 }' 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:29.946 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.204 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:30.204 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.204 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.204 [2024-11-26 17:33:07.645788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:30.204 
[2024-11-26 17:33:07.645831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:30.205 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.205 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.465 [2024-11-26 17:33:07.653807] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:30.465 [2024-11-26 17:33:07.653851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:30.465 [2024-11-26 17:33:07.653861] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:30.465 [2024-11-26 17:33:07.653874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:30.465 [2024-11-26 17:33:07.653881] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:30.465 [2024-11-26 17:33:07.653894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.465 [2024-11-26 17:33:07.696192] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:30.465 BaseBdev1 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.465 [ 00:36:30.465 { 00:36:30.465 "name": "BaseBdev1", 00:36:30.465 "aliases": [ 00:36:30.465 "b33ed462-b062-48bf-9242-a5df8d70f4d9" 00:36:30.465 ], 00:36:30.465 "product_name": "Malloc disk", 00:36:30.465 "block_size": 512, 00:36:30.465 
"num_blocks": 65536, 00:36:30.465 "uuid": "b33ed462-b062-48bf-9242-a5df8d70f4d9", 00:36:30.465 "assigned_rate_limits": { 00:36:30.465 "rw_ios_per_sec": 0, 00:36:30.465 "rw_mbytes_per_sec": 0, 00:36:30.465 "r_mbytes_per_sec": 0, 00:36:30.465 "w_mbytes_per_sec": 0 00:36:30.465 }, 00:36:30.465 "claimed": true, 00:36:30.465 "claim_type": "exclusive_write", 00:36:30.465 "zoned": false, 00:36:30.465 "supported_io_types": { 00:36:30.465 "read": true, 00:36:30.465 "write": true, 00:36:30.465 "unmap": true, 00:36:30.465 "flush": true, 00:36:30.465 "reset": true, 00:36:30.465 "nvme_admin": false, 00:36:30.465 "nvme_io": false, 00:36:30.465 "nvme_io_md": false, 00:36:30.465 "write_zeroes": true, 00:36:30.465 "zcopy": true, 00:36:30.465 "get_zone_info": false, 00:36:30.465 "zone_management": false, 00:36:30.465 "zone_append": false, 00:36:30.465 "compare": false, 00:36:30.465 "compare_and_write": false, 00:36:30.465 "abort": true, 00:36:30.465 "seek_hole": false, 00:36:30.465 "seek_data": false, 00:36:30.465 "copy": true, 00:36:30.465 "nvme_iov_md": false 00:36:30.465 }, 00:36:30.465 "memory_domains": [ 00:36:30.465 { 00:36:30.465 "dma_device_id": "system", 00:36:30.465 "dma_device_type": 1 00:36:30.465 }, 00:36:30.465 { 00:36:30.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:30.465 "dma_device_type": 2 00:36:30.465 } 00:36:30.465 ], 00:36:30.465 "driver_specific": {} 00:36:30.465 } 00:36:30.465 ] 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:30.465 "name": "Existed_Raid", 00:36:30.465 "uuid": "f0d710f8-e612-48bf-b713-485d73a64f5d", 00:36:30.465 "strip_size_kb": 64, 00:36:30.465 "state": "configuring", 00:36:30.465 "raid_level": "raid5f", 00:36:30.465 "superblock": true, 00:36:30.465 "num_base_bdevs": 3, 00:36:30.465 "num_base_bdevs_discovered": 1, 00:36:30.465 "num_base_bdevs_operational": 3, 00:36:30.465 "base_bdevs_list": [ 00:36:30.465 { 00:36:30.465 
"name": "BaseBdev1", 00:36:30.465 "uuid": "b33ed462-b062-48bf-9242-a5df8d70f4d9", 00:36:30.465 "is_configured": true, 00:36:30.465 "data_offset": 2048, 00:36:30.465 "data_size": 63488 00:36:30.465 }, 00:36:30.465 { 00:36:30.465 "name": "BaseBdev2", 00:36:30.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.465 "is_configured": false, 00:36:30.465 "data_offset": 0, 00:36:30.465 "data_size": 0 00:36:30.465 }, 00:36:30.465 { 00:36:30.465 "name": "BaseBdev3", 00:36:30.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.465 "is_configured": false, 00:36:30.465 "data_offset": 0, 00:36:30.465 "data_size": 0 00:36:30.465 } 00:36:30.465 ] 00:36:30.465 }' 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:30.465 17:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.725 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:30.725 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.725 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.725 [2024-11-26 17:33:08.168338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:30.725 [2024-11-26 17:33:08.168397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:36:30.985 [2024-11-26 17:33:08.180401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:30.985 [2024-11-26 17:33:08.182547] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:30.985 [2024-11-26 17:33:08.182593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:30.985 [2024-11-26 17:33:08.182604] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:30.985 [2024-11-26 17:33:08.182617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:30.985 "name": "Existed_Raid", 00:36:30.985 "uuid": "6353339b-a3fd-4d3b-b099-247b6f430cd7", 00:36:30.985 "strip_size_kb": 64, 00:36:30.985 "state": "configuring", 00:36:30.985 "raid_level": "raid5f", 00:36:30.985 "superblock": true, 00:36:30.985 "num_base_bdevs": 3, 00:36:30.985 "num_base_bdevs_discovered": 1, 00:36:30.985 "num_base_bdevs_operational": 3, 00:36:30.985 "base_bdevs_list": [ 00:36:30.985 { 00:36:30.985 "name": "BaseBdev1", 00:36:30.985 "uuid": "b33ed462-b062-48bf-9242-a5df8d70f4d9", 00:36:30.985 "is_configured": true, 00:36:30.985 "data_offset": 2048, 00:36:30.985 "data_size": 63488 00:36:30.985 }, 00:36:30.985 { 00:36:30.985 "name": "BaseBdev2", 00:36:30.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.985 "is_configured": false, 00:36:30.985 "data_offset": 0, 00:36:30.985 "data_size": 0 00:36:30.985 }, 00:36:30.985 { 00:36:30.985 "name": "BaseBdev3", 00:36:30.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.985 "is_configured": false, 00:36:30.985 "data_offset": 0, 00:36:30.985 "data_size": 
0 00:36:30.985 } 00:36:30.985 ] 00:36:30.985 }' 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:30.985 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.244 [2024-11-26 17:33:08.668146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:31.244 BaseBdev2 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:31.244 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.245 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.245 [ 00:36:31.245 { 00:36:31.245 "name": "BaseBdev2", 00:36:31.245 "aliases": [ 00:36:31.245 "48e4a84d-0079-4fe8-a39f-6a2857424dba" 00:36:31.245 ], 00:36:31.245 "product_name": "Malloc disk", 00:36:31.245 "block_size": 512, 00:36:31.245 "num_blocks": 65536, 00:36:31.504 "uuid": "48e4a84d-0079-4fe8-a39f-6a2857424dba", 00:36:31.504 "assigned_rate_limits": { 00:36:31.504 "rw_ios_per_sec": 0, 00:36:31.504 "rw_mbytes_per_sec": 0, 00:36:31.504 "r_mbytes_per_sec": 0, 00:36:31.504 "w_mbytes_per_sec": 0 00:36:31.504 }, 00:36:31.504 "claimed": true, 00:36:31.504 "claim_type": "exclusive_write", 00:36:31.504 "zoned": false, 00:36:31.504 "supported_io_types": { 00:36:31.504 "read": true, 00:36:31.504 "write": true, 00:36:31.504 "unmap": true, 00:36:31.504 "flush": true, 00:36:31.504 "reset": true, 00:36:31.504 "nvme_admin": false, 00:36:31.504 "nvme_io": false, 00:36:31.504 "nvme_io_md": false, 00:36:31.504 "write_zeroes": true, 00:36:31.504 "zcopy": true, 00:36:31.504 "get_zone_info": false, 00:36:31.504 "zone_management": false, 00:36:31.504 "zone_append": false, 00:36:31.504 "compare": false, 00:36:31.504 "compare_and_write": false, 00:36:31.504 "abort": true, 00:36:31.504 "seek_hole": false, 00:36:31.504 "seek_data": false, 00:36:31.504 "copy": true, 00:36:31.504 "nvme_iov_md": false 00:36:31.504 }, 00:36:31.504 "memory_domains": [ 00:36:31.504 { 00:36:31.504 "dma_device_id": "system", 00:36:31.504 "dma_device_type": 1 00:36:31.504 }, 00:36:31.504 { 00:36:31.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:31.504 "dma_device_type": 2 00:36:31.504 } 
00:36:31.504 ], 00:36:31.504 "driver_specific": {} 00:36:31.504 } 00:36:31.504 ] 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.504 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:31.504 "name": "Existed_Raid", 00:36:31.504 "uuid": "6353339b-a3fd-4d3b-b099-247b6f430cd7", 00:36:31.504 "strip_size_kb": 64, 00:36:31.504 "state": "configuring", 00:36:31.504 "raid_level": "raid5f", 00:36:31.504 "superblock": true, 00:36:31.504 "num_base_bdevs": 3, 00:36:31.504 "num_base_bdevs_discovered": 2, 00:36:31.504 "num_base_bdevs_operational": 3, 00:36:31.504 "base_bdevs_list": [ 00:36:31.504 { 00:36:31.504 "name": "BaseBdev1", 00:36:31.504 "uuid": "b33ed462-b062-48bf-9242-a5df8d70f4d9", 00:36:31.505 "is_configured": true, 00:36:31.505 "data_offset": 2048, 00:36:31.505 "data_size": 63488 00:36:31.505 }, 00:36:31.505 { 00:36:31.505 "name": "BaseBdev2", 00:36:31.505 "uuid": "48e4a84d-0079-4fe8-a39f-6a2857424dba", 00:36:31.505 "is_configured": true, 00:36:31.505 "data_offset": 2048, 00:36:31.505 "data_size": 63488 00:36:31.505 }, 00:36:31.505 { 00:36:31.505 "name": "BaseBdev3", 00:36:31.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:31.505 "is_configured": false, 00:36:31.505 "data_offset": 0, 00:36:31.505 "data_size": 0 00:36:31.505 } 00:36:31.505 ] 00:36:31.505 }' 00:36:31.505 17:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:31.505 17:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.764 [2024-11-26 17:33:09.206503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:31.764 [2024-11-26 17:33:09.206788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:31.764 [2024-11-26 17:33:09.206818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:31.764 [2024-11-26 17:33:09.207257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:31.764 BaseBdev3 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:31.764 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.023 [2024-11-26 17:33:09.213086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:32.023 [2024-11-26 17:33:09.213109] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:32.023 [2024-11-26 17:33:09.213400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.023 [ 00:36:32.023 { 00:36:32.023 "name": "BaseBdev3", 00:36:32.023 "aliases": [ 00:36:32.023 "8d69dd36-4dcd-4c8a-8669-be997d08c9e2" 00:36:32.023 ], 00:36:32.023 "product_name": "Malloc disk", 00:36:32.023 "block_size": 512, 00:36:32.023 "num_blocks": 65536, 00:36:32.023 "uuid": "8d69dd36-4dcd-4c8a-8669-be997d08c9e2", 00:36:32.023 "assigned_rate_limits": { 00:36:32.023 "rw_ios_per_sec": 0, 00:36:32.023 "rw_mbytes_per_sec": 0, 00:36:32.023 "r_mbytes_per_sec": 0, 00:36:32.023 "w_mbytes_per_sec": 0 00:36:32.023 }, 00:36:32.023 "claimed": true, 00:36:32.023 "claim_type": "exclusive_write", 00:36:32.023 "zoned": false, 00:36:32.023 "supported_io_types": { 00:36:32.023 "read": true, 00:36:32.023 "write": true, 00:36:32.023 "unmap": true, 00:36:32.023 "flush": true, 00:36:32.023 "reset": true, 00:36:32.023 "nvme_admin": false, 00:36:32.023 "nvme_io": false, 00:36:32.023 "nvme_io_md": false, 00:36:32.023 "write_zeroes": true, 00:36:32.023 "zcopy": true, 00:36:32.023 "get_zone_info": false, 00:36:32.023 "zone_management": false, 00:36:32.023 "zone_append": false, 00:36:32.023 "compare": false, 00:36:32.023 "compare_and_write": false, 00:36:32.023 "abort": true, 00:36:32.023 "seek_hole": false, 00:36:32.023 "seek_data": false, 00:36:32.023 "copy": true, 00:36:32.023 
"nvme_iov_md": false 00:36:32.023 }, 00:36:32.023 "memory_domains": [ 00:36:32.023 { 00:36:32.023 "dma_device_id": "system", 00:36:32.023 "dma_device_type": 1 00:36:32.023 }, 00:36:32.023 { 00:36:32.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.023 "dma_device_type": 2 00:36:32.023 } 00:36:32.023 ], 00:36:32.023 "driver_specific": {} 00:36:32.023 } 00:36:32.023 ] 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.023 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:32.023 "name": "Existed_Raid", 00:36:32.023 "uuid": "6353339b-a3fd-4d3b-b099-247b6f430cd7", 00:36:32.023 "strip_size_kb": 64, 00:36:32.023 "state": "online", 00:36:32.023 "raid_level": "raid5f", 00:36:32.023 "superblock": true, 00:36:32.023 "num_base_bdevs": 3, 00:36:32.023 "num_base_bdevs_discovered": 3, 00:36:32.023 "num_base_bdevs_operational": 3, 00:36:32.023 "base_bdevs_list": [ 00:36:32.023 { 00:36:32.023 "name": "BaseBdev1", 00:36:32.023 "uuid": "b33ed462-b062-48bf-9242-a5df8d70f4d9", 00:36:32.023 "is_configured": true, 00:36:32.023 "data_offset": 2048, 00:36:32.023 "data_size": 63488 00:36:32.023 }, 00:36:32.023 { 00:36:32.023 "name": "BaseBdev2", 00:36:32.023 "uuid": "48e4a84d-0079-4fe8-a39f-6a2857424dba", 00:36:32.023 "is_configured": true, 00:36:32.023 "data_offset": 2048, 00:36:32.023 "data_size": 63488 00:36:32.023 }, 00:36:32.023 { 00:36:32.023 "name": "BaseBdev3", 00:36:32.023 "uuid": "8d69dd36-4dcd-4c8a-8669-be997d08c9e2", 00:36:32.023 "is_configured": true, 00:36:32.023 "data_offset": 2048, 00:36:32.023 "data_size": 63488 00:36:32.024 } 00:36:32.024 ] 00:36:32.024 }' 00:36:32.024 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:32.024 17:33:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.283 [2024-11-26 17:33:09.660175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:32.283 "name": "Existed_Raid", 00:36:32.283 "aliases": [ 00:36:32.283 "6353339b-a3fd-4d3b-b099-247b6f430cd7" 00:36:32.283 ], 00:36:32.283 "product_name": "Raid Volume", 00:36:32.283 "block_size": 512, 00:36:32.283 "num_blocks": 126976, 00:36:32.283 "uuid": "6353339b-a3fd-4d3b-b099-247b6f430cd7", 00:36:32.283 "assigned_rate_limits": { 00:36:32.283 "rw_ios_per_sec": 0, 00:36:32.283 
"rw_mbytes_per_sec": 0, 00:36:32.283 "r_mbytes_per_sec": 0, 00:36:32.283 "w_mbytes_per_sec": 0 00:36:32.283 }, 00:36:32.283 "claimed": false, 00:36:32.283 "zoned": false, 00:36:32.283 "supported_io_types": { 00:36:32.283 "read": true, 00:36:32.283 "write": true, 00:36:32.283 "unmap": false, 00:36:32.283 "flush": false, 00:36:32.283 "reset": true, 00:36:32.283 "nvme_admin": false, 00:36:32.283 "nvme_io": false, 00:36:32.283 "nvme_io_md": false, 00:36:32.283 "write_zeroes": true, 00:36:32.283 "zcopy": false, 00:36:32.283 "get_zone_info": false, 00:36:32.283 "zone_management": false, 00:36:32.283 "zone_append": false, 00:36:32.283 "compare": false, 00:36:32.283 "compare_and_write": false, 00:36:32.283 "abort": false, 00:36:32.283 "seek_hole": false, 00:36:32.283 "seek_data": false, 00:36:32.283 "copy": false, 00:36:32.283 "nvme_iov_md": false 00:36:32.283 }, 00:36:32.283 "driver_specific": { 00:36:32.283 "raid": { 00:36:32.283 "uuid": "6353339b-a3fd-4d3b-b099-247b6f430cd7", 00:36:32.283 "strip_size_kb": 64, 00:36:32.283 "state": "online", 00:36:32.283 "raid_level": "raid5f", 00:36:32.283 "superblock": true, 00:36:32.283 "num_base_bdevs": 3, 00:36:32.283 "num_base_bdevs_discovered": 3, 00:36:32.283 "num_base_bdevs_operational": 3, 00:36:32.283 "base_bdevs_list": [ 00:36:32.283 { 00:36:32.283 "name": "BaseBdev1", 00:36:32.283 "uuid": "b33ed462-b062-48bf-9242-a5df8d70f4d9", 00:36:32.283 "is_configured": true, 00:36:32.283 "data_offset": 2048, 00:36:32.283 "data_size": 63488 00:36:32.283 }, 00:36:32.283 { 00:36:32.283 "name": "BaseBdev2", 00:36:32.283 "uuid": "48e4a84d-0079-4fe8-a39f-6a2857424dba", 00:36:32.283 "is_configured": true, 00:36:32.283 "data_offset": 2048, 00:36:32.283 "data_size": 63488 00:36:32.283 }, 00:36:32.283 { 00:36:32.283 "name": "BaseBdev3", 00:36:32.283 "uuid": "8d69dd36-4dcd-4c8a-8669-be997d08c9e2", 00:36:32.283 "is_configured": true, 00:36:32.283 "data_offset": 2048, 00:36:32.283 "data_size": 63488 00:36:32.283 } 00:36:32.283 ] 00:36:32.283 } 
00:36:32.283 } 00:36:32.283 }' 00:36:32.283 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:32.543 BaseBdev2 00:36:32.543 BaseBdev3' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.543 17:33:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.543 [2024-11-26 17:33:09.912073] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.802 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.803 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:32.803 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.803 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:32.803 "name": "Existed_Raid", 00:36:32.803 "uuid": "6353339b-a3fd-4d3b-b099-247b6f430cd7", 00:36:32.803 "strip_size_kb": 64, 00:36:32.803 "state": "online", 00:36:32.803 "raid_level": "raid5f", 00:36:32.803 "superblock": true, 00:36:32.803 "num_base_bdevs": 3, 00:36:32.803 "num_base_bdevs_discovered": 2, 00:36:32.803 "num_base_bdevs_operational": 2, 00:36:32.803 "base_bdevs_list": [ 00:36:32.803 { 00:36:32.803 "name": null, 00:36:32.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:32.803 "is_configured": false, 00:36:32.803 "data_offset": 0, 00:36:32.803 "data_size": 63488 00:36:32.803 }, 00:36:32.803 { 00:36:32.803 "name": "BaseBdev2", 00:36:32.803 "uuid": "48e4a84d-0079-4fe8-a39f-6a2857424dba", 00:36:32.803 "is_configured": true, 00:36:32.803 "data_offset": 2048, 00:36:32.803 "data_size": 63488 00:36:32.803 }, 00:36:32.803 { 00:36:32.803 "name": "BaseBdev3", 00:36:32.803 "uuid": "8d69dd36-4dcd-4c8a-8669-be997d08c9e2", 00:36:32.803 "is_configured": true, 00:36:32.803 "data_offset": 2048, 00:36:32.803 "data_size": 63488 00:36:32.803 } 00:36:32.803 ] 00:36:32.803 }' 00:36:32.803 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:32.803 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.061 17:33:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:33.061 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.062 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.062 [2024-11-26 17:33:10.499283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:33.062 [2024-11-26 17:33:10.499432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:33.321 [2024-11-26 17:33:10.592938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.321 [2024-11-26 17:33:10.652963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:33.321 [2024-11-26 17:33:10.653017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.321 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.581 BaseBdev2 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.581 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.581 [ 00:36:33.581 { 00:36:33.581 "name": "BaseBdev2", 00:36:33.581 "aliases": [ 00:36:33.581 "4be95279-da08-4083-8507-5203520dde41" 00:36:33.581 ], 00:36:33.581 "product_name": "Malloc disk", 00:36:33.581 "block_size": 512, 00:36:33.581 "num_blocks": 65536, 00:36:33.581 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:33.581 "assigned_rate_limits": { 00:36:33.581 "rw_ios_per_sec": 0, 00:36:33.581 "rw_mbytes_per_sec": 0, 00:36:33.581 "r_mbytes_per_sec": 0, 00:36:33.581 "w_mbytes_per_sec": 0 00:36:33.581 }, 00:36:33.581 "claimed": false, 00:36:33.581 "zoned": false, 00:36:33.581 "supported_io_types": { 00:36:33.581 "read": true, 00:36:33.581 "write": true, 00:36:33.581 "unmap": true, 00:36:33.581 "flush": true, 00:36:33.581 "reset": true, 00:36:33.581 "nvme_admin": false, 00:36:33.581 "nvme_io": false, 00:36:33.581 "nvme_io_md": false, 00:36:33.581 "write_zeroes": true, 00:36:33.581 "zcopy": true, 00:36:33.581 "get_zone_info": false, 00:36:33.581 "zone_management": false, 00:36:33.581 "zone_append": false, 
00:36:33.581 "compare": false, 00:36:33.581 "compare_and_write": false, 00:36:33.581 "abort": true, 00:36:33.581 "seek_hole": false, 00:36:33.582 "seek_data": false, 00:36:33.582 "copy": true, 00:36:33.582 "nvme_iov_md": false 00:36:33.582 }, 00:36:33.582 "memory_domains": [ 00:36:33.582 { 00:36:33.582 "dma_device_id": "system", 00:36:33.582 "dma_device_type": 1 00:36:33.582 }, 00:36:33.582 { 00:36:33.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:33.582 "dma_device_type": 2 00:36:33.582 } 00:36:33.582 ], 00:36:33.582 "driver_specific": {} 00:36:33.582 } 00:36:33.582 ] 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.582 BaseBdev3 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:33.582 
17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.582 [ 00:36:33.582 { 00:36:33.582 "name": "BaseBdev3", 00:36:33.582 "aliases": [ 00:36:33.582 "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12" 00:36:33.582 ], 00:36:33.582 "product_name": "Malloc disk", 00:36:33.582 "block_size": 512, 00:36:33.582 "num_blocks": 65536, 00:36:33.582 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 00:36:33.582 "assigned_rate_limits": { 00:36:33.582 "rw_ios_per_sec": 0, 00:36:33.582 "rw_mbytes_per_sec": 0, 00:36:33.582 "r_mbytes_per_sec": 0, 00:36:33.582 "w_mbytes_per_sec": 0 00:36:33.582 }, 00:36:33.582 "claimed": false, 00:36:33.582 "zoned": false, 00:36:33.582 "supported_io_types": { 00:36:33.582 "read": true, 00:36:33.582 "write": true, 00:36:33.582 "unmap": true, 00:36:33.582 "flush": true, 00:36:33.582 "reset": true, 00:36:33.582 "nvme_admin": false, 00:36:33.582 "nvme_io": false, 00:36:33.582 "nvme_io_md": false, 00:36:33.582 "write_zeroes": true, 00:36:33.582 "zcopy": true, 00:36:33.582 "get_zone_info": 
false, 00:36:33.582 "zone_management": false, 00:36:33.582 "zone_append": false, 00:36:33.582 "compare": false, 00:36:33.582 "compare_and_write": false, 00:36:33.582 "abort": true, 00:36:33.582 "seek_hole": false, 00:36:33.582 "seek_data": false, 00:36:33.582 "copy": true, 00:36:33.582 "nvme_iov_md": false 00:36:33.582 }, 00:36:33.582 "memory_domains": [ 00:36:33.582 { 00:36:33.582 "dma_device_id": "system", 00:36:33.582 "dma_device_type": 1 00:36:33.582 }, 00:36:33.582 { 00:36:33.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:33.582 "dma_device_type": 2 00:36:33.582 } 00:36:33.582 ], 00:36:33.582 "driver_specific": {} 00:36:33.582 } 00:36:33.582 ] 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.582 [2024-11-26 17:33:10.960278] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:33.582 [2024-11-26 17:33:10.960322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:33.582 [2024-11-26 17:33:10.960345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:33.582 [2024-11-26 17:33:10.962402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.582 17:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.582 17:33:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:33.583 "name": "Existed_Raid", 00:36:33.583 "uuid": "9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:33.583 "strip_size_kb": 64, 00:36:33.583 "state": "configuring", 00:36:33.583 "raid_level": "raid5f", 00:36:33.583 "superblock": true, 00:36:33.583 "num_base_bdevs": 3, 00:36:33.583 "num_base_bdevs_discovered": 2, 00:36:33.583 "num_base_bdevs_operational": 3, 00:36:33.583 "base_bdevs_list": [ 00:36:33.583 { 00:36:33.583 "name": "BaseBdev1", 00:36:33.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:33.583 "is_configured": false, 00:36:33.583 "data_offset": 0, 00:36:33.583 "data_size": 0 00:36:33.583 }, 00:36:33.583 { 00:36:33.583 "name": "BaseBdev2", 00:36:33.583 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:33.583 "is_configured": true, 00:36:33.583 "data_offset": 2048, 00:36:33.583 "data_size": 63488 00:36:33.583 }, 00:36:33.583 { 00:36:33.583 "name": "BaseBdev3", 00:36:33.583 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 00:36:33.583 "is_configured": true, 00:36:33.583 "data_offset": 2048, 00:36:33.583 "data_size": 63488 00:36:33.583 } 00:36:33.583 ] 00:36:33.583 }' 00:36:33.583 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:33.583 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.152 [2024-11-26 17:33:11.420399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.152 
17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:34.152 "name": "Existed_Raid", 00:36:34.152 "uuid": 
"9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:34.152 "strip_size_kb": 64, 00:36:34.152 "state": "configuring", 00:36:34.152 "raid_level": "raid5f", 00:36:34.152 "superblock": true, 00:36:34.152 "num_base_bdevs": 3, 00:36:34.152 "num_base_bdevs_discovered": 1, 00:36:34.152 "num_base_bdevs_operational": 3, 00:36:34.152 "base_bdevs_list": [ 00:36:34.152 { 00:36:34.152 "name": "BaseBdev1", 00:36:34.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.152 "is_configured": false, 00:36:34.152 "data_offset": 0, 00:36:34.152 "data_size": 0 00:36:34.152 }, 00:36:34.152 { 00:36:34.152 "name": null, 00:36:34.152 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:34.152 "is_configured": false, 00:36:34.152 "data_offset": 0, 00:36:34.152 "data_size": 63488 00:36:34.152 }, 00:36:34.152 { 00:36:34.152 "name": "BaseBdev3", 00:36:34.152 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 00:36:34.152 "is_configured": true, 00:36:34.152 "data_offset": 2048, 00:36:34.152 "data_size": 63488 00:36:34.152 } 00:36:34.152 ] 00:36:34.152 }' 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:34.152 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.411 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:34.411 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:34.411 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.411 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:34.670 17:33:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.670 [2024-11-26 17:33:11.935713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:34.670 BaseBdev1 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:36:34.670 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.670 [ 00:36:34.670 { 00:36:34.670 "name": "BaseBdev1", 00:36:34.670 "aliases": [ 00:36:34.670 "d103436a-7399-45b6-8a7f-05d47af64f3b" 00:36:34.670 ], 00:36:34.670 "product_name": "Malloc disk", 00:36:34.670 "block_size": 512, 00:36:34.670 "num_blocks": 65536, 00:36:34.670 "uuid": "d103436a-7399-45b6-8a7f-05d47af64f3b", 00:36:34.671 "assigned_rate_limits": { 00:36:34.671 "rw_ios_per_sec": 0, 00:36:34.671 "rw_mbytes_per_sec": 0, 00:36:34.671 "r_mbytes_per_sec": 0, 00:36:34.671 "w_mbytes_per_sec": 0 00:36:34.671 }, 00:36:34.671 "claimed": true, 00:36:34.671 "claim_type": "exclusive_write", 00:36:34.671 "zoned": false, 00:36:34.671 "supported_io_types": { 00:36:34.671 "read": true, 00:36:34.671 "write": true, 00:36:34.671 "unmap": true, 00:36:34.671 "flush": true, 00:36:34.671 "reset": true, 00:36:34.671 "nvme_admin": false, 00:36:34.671 "nvme_io": false, 00:36:34.671 "nvme_io_md": false, 00:36:34.671 "write_zeroes": true, 00:36:34.671 "zcopy": true, 00:36:34.671 "get_zone_info": false, 00:36:34.671 "zone_management": false, 00:36:34.671 "zone_append": false, 00:36:34.671 "compare": false, 00:36:34.671 "compare_and_write": false, 00:36:34.671 "abort": true, 00:36:34.671 "seek_hole": false, 00:36:34.671 "seek_data": false, 00:36:34.671 "copy": true, 00:36:34.671 "nvme_iov_md": false 00:36:34.671 }, 00:36:34.671 "memory_domains": [ 00:36:34.671 { 00:36:34.671 "dma_device_id": "system", 00:36:34.671 "dma_device_type": 1 00:36:34.671 }, 00:36:34.671 { 00:36:34.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:34.671 "dma_device_type": 2 00:36:34.671 } 00:36:34.671 ], 00:36:34.671 "driver_specific": {} 00:36:34.671 } 00:36:34.671 ] 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:34.671 17:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.671 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.671 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:34.671 "name": "Existed_Raid", 00:36:34.671 "uuid": 
"9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:34.671 "strip_size_kb": 64, 00:36:34.671 "state": "configuring", 00:36:34.671 "raid_level": "raid5f", 00:36:34.671 "superblock": true, 00:36:34.671 "num_base_bdevs": 3, 00:36:34.671 "num_base_bdevs_discovered": 2, 00:36:34.671 "num_base_bdevs_operational": 3, 00:36:34.671 "base_bdevs_list": [ 00:36:34.671 { 00:36:34.671 "name": "BaseBdev1", 00:36:34.671 "uuid": "d103436a-7399-45b6-8a7f-05d47af64f3b", 00:36:34.671 "is_configured": true, 00:36:34.671 "data_offset": 2048, 00:36:34.671 "data_size": 63488 00:36:34.671 }, 00:36:34.671 { 00:36:34.671 "name": null, 00:36:34.671 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:34.671 "is_configured": false, 00:36:34.671 "data_offset": 0, 00:36:34.671 "data_size": 63488 00:36:34.671 }, 00:36:34.671 { 00:36:34.671 "name": "BaseBdev3", 00:36:34.671 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 00:36:34.671 "is_configured": true, 00:36:34.671 "data_offset": 2048, 00:36:34.671 "data_size": 63488 00:36:34.671 } 00:36:34.671 ] 00:36:34.671 }' 00:36:34.671 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:34.671 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:35.240 17:33:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.240 [2024-11-26 17:33:12.483891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:35.240 "name": "Existed_Raid", 00:36:35.240 "uuid": "9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:35.240 "strip_size_kb": 64, 00:36:35.240 "state": "configuring", 00:36:35.240 "raid_level": "raid5f", 00:36:35.240 "superblock": true, 00:36:35.240 "num_base_bdevs": 3, 00:36:35.240 "num_base_bdevs_discovered": 1, 00:36:35.240 "num_base_bdevs_operational": 3, 00:36:35.240 "base_bdevs_list": [ 00:36:35.240 { 00:36:35.240 "name": "BaseBdev1", 00:36:35.240 "uuid": "d103436a-7399-45b6-8a7f-05d47af64f3b", 00:36:35.240 "is_configured": true, 00:36:35.240 "data_offset": 2048, 00:36:35.240 "data_size": 63488 00:36:35.240 }, 00:36:35.240 { 00:36:35.240 "name": null, 00:36:35.240 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:35.240 "is_configured": false, 00:36:35.240 "data_offset": 0, 00:36:35.240 "data_size": 63488 00:36:35.240 }, 00:36:35.240 { 00:36:35.240 "name": null, 00:36:35.240 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 00:36:35.240 "is_configured": false, 00:36:35.240 "data_offset": 0, 00:36:35.240 "data_size": 63488 00:36:35.240 } 00:36:35.240 ] 00:36:35.240 }' 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:35.240 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.809 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:35.809 17:33:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.809 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.809 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:35.809 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.809 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:35.809 17:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:35.809 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.809 17:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.809 [2024-11-26 17:33:13.000013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:35.809 "name": "Existed_Raid", 00:36:35.809 "uuid": "9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:35.809 "strip_size_kb": 64, 00:36:35.809 "state": "configuring", 00:36:35.809 "raid_level": "raid5f", 00:36:35.809 "superblock": true, 00:36:35.809 "num_base_bdevs": 3, 00:36:35.809 "num_base_bdevs_discovered": 2, 00:36:35.809 "num_base_bdevs_operational": 3, 00:36:35.809 "base_bdevs_list": [ 00:36:35.809 { 00:36:35.809 "name": "BaseBdev1", 00:36:35.809 "uuid": "d103436a-7399-45b6-8a7f-05d47af64f3b", 00:36:35.809 "is_configured": true, 00:36:35.809 "data_offset": 2048, 00:36:35.809 "data_size": 63488 00:36:35.809 }, 00:36:35.809 { 00:36:35.809 "name": null, 00:36:35.809 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:35.809 "is_configured": false, 00:36:35.809 "data_offset": 0, 00:36:35.809 "data_size": 63488 00:36:35.809 }, 00:36:35.809 { 00:36:35.809 "name": "BaseBdev3", 00:36:35.809 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 
00:36:35.809 "is_configured": true, 00:36:35.809 "data_offset": 2048, 00:36:35.809 "data_size": 63488 00:36:35.809 } 00:36:35.809 ] 00:36:35.809 }' 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:35.809 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.069 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:36.069 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.069 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.069 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:36.069 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.069 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:36.069 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:36.069 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.069 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.069 [2024-11-26 17:33:13.484148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:36.328 "name": "Existed_Raid", 00:36:36.328 "uuid": "9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:36.328 "strip_size_kb": 64, 00:36:36.328 "state": "configuring", 00:36:36.328 "raid_level": "raid5f", 00:36:36.328 "superblock": true, 00:36:36.328 "num_base_bdevs": 3, 00:36:36.328 "num_base_bdevs_discovered": 1, 00:36:36.328 "num_base_bdevs_operational": 3, 00:36:36.328 "base_bdevs_list": [ 00:36:36.328 { 00:36:36.328 
"name": null, 00:36:36.328 "uuid": "d103436a-7399-45b6-8a7f-05d47af64f3b", 00:36:36.328 "is_configured": false, 00:36:36.328 "data_offset": 0, 00:36:36.328 "data_size": 63488 00:36:36.328 }, 00:36:36.328 { 00:36:36.328 "name": null, 00:36:36.328 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:36.328 "is_configured": false, 00:36:36.328 "data_offset": 0, 00:36:36.328 "data_size": 63488 00:36:36.328 }, 00:36:36.328 { 00:36:36.328 "name": "BaseBdev3", 00:36:36.328 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 00:36:36.328 "is_configured": true, 00:36:36.328 "data_offset": 2048, 00:36:36.328 "data_size": 63488 00:36:36.328 } 00:36:36.328 ] 00:36:36.328 }' 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:36.328 17:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.588 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:36.588 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:36.588 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.588 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.851 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.851 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:36.851 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:36.851 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.851 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.851 [2024-11-26 
17:33:14.062260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:36.851 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.851 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:36.851 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:36.852 "name": "Existed_Raid", 00:36:36.852 "uuid": "9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:36.852 "strip_size_kb": 64, 00:36:36.852 "state": "configuring", 00:36:36.852 "raid_level": "raid5f", 00:36:36.852 "superblock": true, 00:36:36.852 "num_base_bdevs": 3, 00:36:36.852 "num_base_bdevs_discovered": 2, 00:36:36.852 "num_base_bdevs_operational": 3, 00:36:36.852 "base_bdevs_list": [ 00:36:36.852 { 00:36:36.852 "name": null, 00:36:36.852 "uuid": "d103436a-7399-45b6-8a7f-05d47af64f3b", 00:36:36.852 "is_configured": false, 00:36:36.852 "data_offset": 0, 00:36:36.852 "data_size": 63488 00:36:36.852 }, 00:36:36.852 { 00:36:36.852 "name": "BaseBdev2", 00:36:36.852 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:36.852 "is_configured": true, 00:36:36.852 "data_offset": 2048, 00:36:36.852 "data_size": 63488 00:36:36.852 }, 00:36:36.852 { 00:36:36.852 "name": "BaseBdev3", 00:36:36.852 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 00:36:36.852 "is_configured": true, 00:36:36.852 "data_offset": 2048, 00:36:36.852 "data_size": 63488 00:36:36.852 } 00:36:36.852 ] 00:36:36.852 }' 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:36.852 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.167 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.167 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.167 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.167 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:37.167 17:33:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.167 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:37.167 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:37.167 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.167 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.167 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d103436a-7399-45b6-8a7f-05d47af64f3b 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.426 [2024-11-26 17:33:14.649368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:37.426 [2024-11-26 17:33:14.649601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:37.426 [2024-11-26 17:33:14.649619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:37.426 [2024-11-26 17:33:14.649876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:36:37.426 NewBaseBdev 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:37.426 17:33:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.426 [2024-11-26 17:33:14.655451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:37.426 [2024-11-26 17:33:14.655607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:37.426 [2024-11-26 17:33:14.655802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.426 [ 00:36:37.426 { 00:36:37.426 "name": "NewBaseBdev", 00:36:37.426 "aliases": [ 00:36:37.426 "d103436a-7399-45b6-8a7f-05d47af64f3b" 00:36:37.426 ], 00:36:37.426 "product_name": "Malloc 
disk", 00:36:37.426 "block_size": 512, 00:36:37.426 "num_blocks": 65536, 00:36:37.426 "uuid": "d103436a-7399-45b6-8a7f-05d47af64f3b", 00:36:37.426 "assigned_rate_limits": { 00:36:37.426 "rw_ios_per_sec": 0, 00:36:37.426 "rw_mbytes_per_sec": 0, 00:36:37.426 "r_mbytes_per_sec": 0, 00:36:37.426 "w_mbytes_per_sec": 0 00:36:37.426 }, 00:36:37.426 "claimed": true, 00:36:37.426 "claim_type": "exclusive_write", 00:36:37.426 "zoned": false, 00:36:37.426 "supported_io_types": { 00:36:37.426 "read": true, 00:36:37.426 "write": true, 00:36:37.426 "unmap": true, 00:36:37.426 "flush": true, 00:36:37.426 "reset": true, 00:36:37.426 "nvme_admin": false, 00:36:37.426 "nvme_io": false, 00:36:37.426 "nvme_io_md": false, 00:36:37.426 "write_zeroes": true, 00:36:37.426 "zcopy": true, 00:36:37.426 "get_zone_info": false, 00:36:37.426 "zone_management": false, 00:36:37.426 "zone_append": false, 00:36:37.426 "compare": false, 00:36:37.426 "compare_and_write": false, 00:36:37.426 "abort": true, 00:36:37.426 "seek_hole": false, 00:36:37.426 "seek_data": false, 00:36:37.426 "copy": true, 00:36:37.426 "nvme_iov_md": false 00:36:37.426 }, 00:36:37.426 "memory_domains": [ 00:36:37.426 { 00:36:37.426 "dma_device_id": "system", 00:36:37.426 "dma_device_type": 1 00:36:37.426 }, 00:36:37.426 { 00:36:37.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:37.426 "dma_device_type": 2 00:36:37.426 } 00:36:37.426 ], 00:36:37.426 "driver_specific": {} 00:36:37.426 } 00:36:37.426 ] 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:37.426 17:33:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:37.426 "name": "Existed_Raid", 00:36:37.426 "uuid": "9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:37.426 "strip_size_kb": 64, 00:36:37.426 "state": "online", 00:36:37.426 "raid_level": "raid5f", 00:36:37.426 "superblock": true, 00:36:37.426 "num_base_bdevs": 3, 00:36:37.426 "num_base_bdevs_discovered": 3, 00:36:37.426 "num_base_bdevs_operational": 3, 00:36:37.426 
"base_bdevs_list": [ 00:36:37.426 { 00:36:37.426 "name": "NewBaseBdev", 00:36:37.426 "uuid": "d103436a-7399-45b6-8a7f-05d47af64f3b", 00:36:37.426 "is_configured": true, 00:36:37.426 "data_offset": 2048, 00:36:37.426 "data_size": 63488 00:36:37.426 }, 00:36:37.426 { 00:36:37.426 "name": "BaseBdev2", 00:36:37.426 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:37.426 "is_configured": true, 00:36:37.426 "data_offset": 2048, 00:36:37.426 "data_size": 63488 00:36:37.426 }, 00:36:37.426 { 00:36:37.426 "name": "BaseBdev3", 00:36:37.426 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 00:36:37.426 "is_configured": true, 00:36:37.426 "data_offset": 2048, 00:36:37.426 "data_size": 63488 00:36:37.426 } 00:36:37.426 ] 00:36:37.426 }' 00:36:37.426 17:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:37.427 17:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.995 [2024-11-26 17:33:15.162305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:37.995 "name": "Existed_Raid", 00:36:37.995 "aliases": [ 00:36:37.995 "9775e6b9-c763-476b-8eec-e6d6e7b71325" 00:36:37.995 ], 00:36:37.995 "product_name": "Raid Volume", 00:36:37.995 "block_size": 512, 00:36:37.995 "num_blocks": 126976, 00:36:37.995 "uuid": "9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:37.995 "assigned_rate_limits": { 00:36:37.995 "rw_ios_per_sec": 0, 00:36:37.995 "rw_mbytes_per_sec": 0, 00:36:37.995 "r_mbytes_per_sec": 0, 00:36:37.995 "w_mbytes_per_sec": 0 00:36:37.995 }, 00:36:37.995 "claimed": false, 00:36:37.995 "zoned": false, 00:36:37.995 "supported_io_types": { 00:36:37.995 "read": true, 00:36:37.995 "write": true, 00:36:37.995 "unmap": false, 00:36:37.995 "flush": false, 00:36:37.995 "reset": true, 00:36:37.995 "nvme_admin": false, 00:36:37.995 "nvme_io": false, 00:36:37.995 "nvme_io_md": false, 00:36:37.995 "write_zeroes": true, 00:36:37.995 "zcopy": false, 00:36:37.995 "get_zone_info": false, 00:36:37.995 "zone_management": false, 00:36:37.995 "zone_append": false, 00:36:37.995 "compare": false, 00:36:37.995 "compare_and_write": false, 00:36:37.995 "abort": false, 00:36:37.995 "seek_hole": false, 00:36:37.995 "seek_data": false, 00:36:37.995 "copy": false, 00:36:37.995 "nvme_iov_md": false 00:36:37.995 }, 00:36:37.995 "driver_specific": { 00:36:37.995 "raid": { 00:36:37.995 "uuid": "9775e6b9-c763-476b-8eec-e6d6e7b71325", 00:36:37.995 "strip_size_kb": 64, 00:36:37.995 "state": "online", 00:36:37.995 "raid_level": "raid5f", 00:36:37.995 "superblock": true, 00:36:37.995 
"num_base_bdevs": 3, 00:36:37.995 "num_base_bdevs_discovered": 3, 00:36:37.995 "num_base_bdevs_operational": 3, 00:36:37.995 "base_bdevs_list": [ 00:36:37.995 { 00:36:37.995 "name": "NewBaseBdev", 00:36:37.995 "uuid": "d103436a-7399-45b6-8a7f-05d47af64f3b", 00:36:37.995 "is_configured": true, 00:36:37.995 "data_offset": 2048, 00:36:37.995 "data_size": 63488 00:36:37.995 }, 00:36:37.995 { 00:36:37.995 "name": "BaseBdev2", 00:36:37.995 "uuid": "4be95279-da08-4083-8507-5203520dde41", 00:36:37.995 "is_configured": true, 00:36:37.995 "data_offset": 2048, 00:36:37.995 "data_size": 63488 00:36:37.995 }, 00:36:37.995 { 00:36:37.995 "name": "BaseBdev3", 00:36:37.995 "uuid": "7fc644a9-fbf1-40c7-bdb3-49c2038f9e12", 00:36:37.995 "is_configured": true, 00:36:37.995 "data_offset": 2048, 00:36:37.995 "data_size": 63488 00:36:37.995 } 00:36:37.995 ] 00:36:37.995 } 00:36:37.995 } 00:36:37.995 }' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:37.995 BaseBdev2 00:36:37.995 BaseBdev3' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.995 
17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.995 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:38.254 [2024-11-26 17:33:15.446132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:38.254 [2024-11-26 17:33:15.446160] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:38.254 [2024-11-26 17:33:15.446250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:38.254 [2024-11-26 17:33:15.446523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:38.254 [2024-11-26 17:33:15.446540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80986 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80986 ']' 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80986 00:36:38.254 17:33:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80986 00:36:38.254 killing process with pid 80986 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80986' 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80986 00:36:38.254 [2024-11-26 17:33:15.488481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:38.254 17:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80986 00:36:38.556 [2024-11-26 17:33:15.794718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:39.492 17:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:36:39.492 00:36:39.492 real 0m10.644s 00:36:39.492 user 0m16.968s 00:36:39.492 sys 0m2.026s 00:36:39.492 ************************************ 00:36:39.492 END TEST raid5f_state_function_test_sb 00:36:39.492 ************************************ 00:36:39.492 17:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.492 17:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:39.751 17:33:16 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:36:39.751 17:33:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:39.751 
17:33:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:39.751 17:33:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:39.751 ************************************ 00:36:39.751 START TEST raid5f_superblock_test 00:36:39.751 ************************************ 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81605 00:36:39.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81605 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81605 ']' 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.751 17:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.751 [2024-11-26 17:33:17.110727] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:36:39.751 [2024-11-26 17:33:17.110906] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81605 ] 00:36:40.010 [2024-11-26 17:33:17.301168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.010 [2024-11-26 17:33:17.412594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.268 [2024-11-26 17:33:17.616928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:40.268 [2024-11-26 17:33:17.616961] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.836 malloc1 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.836 [2024-11-26 17:33:18.095368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:40.836 [2024-11-26 17:33:18.095583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:40.836 [2024-11-26 17:33:18.095649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:40.836 [2024-11-26 17:33:18.095735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:40.836 [2024-11-26 17:33:18.098247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:40.836 [2024-11-26 17:33:18.098396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:40.836 pt1 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.836 malloc2 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.836 [2024-11-26 17:33:18.151015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:40.836 [2024-11-26 17:33:18.151092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:40.836 [2024-11-26 17:33:18.151123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:40.836 [2024-11-26 17:33:18.151135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:40.836 [2024-11-26 17:33:18.153489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:40.836 [2024-11-26 17:33:18.153528] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:40.836 pt2 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.836 malloc3 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.836 [2024-11-26 17:33:18.220347] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:40.836 [2024-11-26 17:33:18.220406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:40.836 [2024-11-26 17:33:18.220430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:40.836 [2024-11-26 17:33:18.220442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:40.836 [2024-11-26 17:33:18.222805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:40.836 [2024-11-26 17:33:18.222845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:40.836 pt3 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.836 [2024-11-26 17:33:18.232398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:40.836 [2024-11-26 17:33:18.234550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:40.836 [2024-11-26 17:33:18.234717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:40.836 [2024-11-26 17:33:18.234914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:40.836 [2024-11-26 17:33:18.235022] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:36:40.836 [2024-11-26 17:33:18.235317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:40.836 [2024-11-26 17:33:18.241873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:40.836 [2024-11-26 17:33:18.241992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:40.836 [2024-11-26 17:33:18.242301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:40.836 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:40.837 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:40.837 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:40.837 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:40.837 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:40.837 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.837 
17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.837 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.837 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.095 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:41.095 "name": "raid_bdev1", 00:36:41.095 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:41.095 "strip_size_kb": 64, 00:36:41.095 "state": "online", 00:36:41.095 "raid_level": "raid5f", 00:36:41.095 "superblock": true, 00:36:41.095 "num_base_bdevs": 3, 00:36:41.095 "num_base_bdevs_discovered": 3, 00:36:41.095 "num_base_bdevs_operational": 3, 00:36:41.095 "base_bdevs_list": [ 00:36:41.095 { 00:36:41.095 "name": "pt1", 00:36:41.095 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:41.095 "is_configured": true, 00:36:41.095 "data_offset": 2048, 00:36:41.095 "data_size": 63488 00:36:41.095 }, 00:36:41.095 { 00:36:41.095 "name": "pt2", 00:36:41.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:41.095 "is_configured": true, 00:36:41.095 "data_offset": 2048, 00:36:41.095 "data_size": 63488 00:36:41.095 }, 00:36:41.095 { 00:36:41.095 "name": "pt3", 00:36:41.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:41.095 "is_configured": true, 00:36:41.095 "data_offset": 2048, 00:36:41.095 "data_size": 63488 00:36:41.095 } 00:36:41.095 ] 00:36:41.095 }' 00:36:41.095 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:41.095 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:41.354 17:33:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:41.354 [2024-11-26 17:33:18.693322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.354 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:41.354 "name": "raid_bdev1", 00:36:41.354 "aliases": [ 00:36:41.354 "a62897ec-6086-495e-a08c-e9d718148fc0" 00:36:41.354 ], 00:36:41.354 "product_name": "Raid Volume", 00:36:41.354 "block_size": 512, 00:36:41.354 "num_blocks": 126976, 00:36:41.354 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:41.354 "assigned_rate_limits": { 00:36:41.354 "rw_ios_per_sec": 0, 00:36:41.354 "rw_mbytes_per_sec": 0, 00:36:41.354 "r_mbytes_per_sec": 0, 00:36:41.354 "w_mbytes_per_sec": 0 00:36:41.354 }, 00:36:41.354 "claimed": false, 00:36:41.354 "zoned": false, 00:36:41.354 "supported_io_types": { 00:36:41.354 "read": true, 00:36:41.354 "write": true, 00:36:41.354 "unmap": false, 00:36:41.354 "flush": false, 00:36:41.354 "reset": true, 00:36:41.354 "nvme_admin": false, 00:36:41.354 "nvme_io": false, 00:36:41.354 "nvme_io_md": false, 
00:36:41.354 "write_zeroes": true, 00:36:41.354 "zcopy": false, 00:36:41.354 "get_zone_info": false, 00:36:41.354 "zone_management": false, 00:36:41.354 "zone_append": false, 00:36:41.354 "compare": false, 00:36:41.354 "compare_and_write": false, 00:36:41.354 "abort": false, 00:36:41.354 "seek_hole": false, 00:36:41.355 "seek_data": false, 00:36:41.355 "copy": false, 00:36:41.355 "nvme_iov_md": false 00:36:41.355 }, 00:36:41.355 "driver_specific": { 00:36:41.355 "raid": { 00:36:41.355 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:41.355 "strip_size_kb": 64, 00:36:41.355 "state": "online", 00:36:41.355 "raid_level": "raid5f", 00:36:41.355 "superblock": true, 00:36:41.355 "num_base_bdevs": 3, 00:36:41.355 "num_base_bdevs_discovered": 3, 00:36:41.355 "num_base_bdevs_operational": 3, 00:36:41.355 "base_bdevs_list": [ 00:36:41.355 { 00:36:41.355 "name": "pt1", 00:36:41.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:41.355 "is_configured": true, 00:36:41.355 "data_offset": 2048, 00:36:41.355 "data_size": 63488 00:36:41.355 }, 00:36:41.355 { 00:36:41.355 "name": "pt2", 00:36:41.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:41.355 "is_configured": true, 00:36:41.355 "data_offset": 2048, 00:36:41.355 "data_size": 63488 00:36:41.355 }, 00:36:41.355 { 00:36:41.355 "name": "pt3", 00:36:41.355 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:41.355 "is_configured": true, 00:36:41.355 "data_offset": 2048, 00:36:41.355 "data_size": 63488 00:36:41.355 } 00:36:41.355 ] 00:36:41.355 } 00:36:41.355 } 00:36:41.355 }' 00:36:41.355 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:41.355 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:41.355 pt2 00:36:41.355 pt3' 00:36:41.355 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:41.614 
17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:36:41.614 [2024-11-26 17:33:18.977335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:41.614 17:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a62897ec-6086-495e-a08c-e9d718148fc0 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a62897ec-6086-495e-a08c-e9d718148fc0 ']' 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:41.614 17:33:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.614 [2024-11-26 17:33:19.021143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:41.614 [2024-11-26 17:33:19.021170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:41.614 [2024-11-26 17:33:19.021245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:41.614 [2024-11-26 17:33:19.021320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:41.614 [2024-11-26 17:33:19.021331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.614 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.874 [2024-11-26 17:33:19.157232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:41.874 [2024-11-26 17:33:19.159377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:41.874 [2024-11-26 17:33:19.159427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:36:41.874 [2024-11-26 17:33:19.159482] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:41.874 [2024-11-26 17:33:19.159536] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:41.874 [2024-11-26 17:33:19.159559] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:36:41.874 [2024-11-26 17:33:19.159580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:41.874 [2024-11-26 17:33:19.159591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:36:41.874 request: 00:36:41.874 { 00:36:41.874 "name": "raid_bdev1", 00:36:41.874 "raid_level": "raid5f", 00:36:41.874 "base_bdevs": [ 00:36:41.874 "malloc1", 00:36:41.874 "malloc2", 00:36:41.874 "malloc3" 00:36:41.874 ], 00:36:41.874 "strip_size_kb": 64, 00:36:41.874 "superblock": false, 00:36:41.874 "method": "bdev_raid_create", 00:36:41.874 "req_id": 1 00:36:41.874 } 00:36:41.874 Got JSON-RPC error response 00:36:41.874 response: 00:36:41.874 { 00:36:41.874 "code": -17, 00:36:41.874 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:41.874 } 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.874 
17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.874 [2024-11-26 17:33:19.221170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:41.874 [2024-11-26 17:33:19.221226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:41.874 [2024-11-26 17:33:19.221248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:36:41.874 [2024-11-26 17:33:19.221259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:41.874 [2024-11-26 17:33:19.223716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:41.874 [2024-11-26 17:33:19.223862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:41.874 [2024-11-26 17:33:19.223959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:41.874 [2024-11-26 17:33:19.224015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:41.874 pt1 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:41.874 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.875 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.875 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.875 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:41.875 "name": "raid_bdev1", 00:36:41.875 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:41.875 "strip_size_kb": 64, 00:36:41.875 "state": "configuring", 00:36:41.875 "raid_level": "raid5f", 00:36:41.875 "superblock": true, 00:36:41.875 "num_base_bdevs": 3, 00:36:41.875 "num_base_bdevs_discovered": 1, 00:36:41.875 
"num_base_bdevs_operational": 3, 00:36:41.875 "base_bdevs_list": [ 00:36:41.875 { 00:36:41.875 "name": "pt1", 00:36:41.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:41.875 "is_configured": true, 00:36:41.875 "data_offset": 2048, 00:36:41.875 "data_size": 63488 00:36:41.875 }, 00:36:41.875 { 00:36:41.875 "name": null, 00:36:41.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:41.875 "is_configured": false, 00:36:41.875 "data_offset": 2048, 00:36:41.875 "data_size": 63488 00:36:41.875 }, 00:36:41.875 { 00:36:41.875 "name": null, 00:36:41.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:41.875 "is_configured": false, 00:36:41.875 "data_offset": 2048, 00:36:41.875 "data_size": 63488 00:36:41.875 } 00:36:41.875 ] 00:36:41.875 }' 00:36:41.875 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:41.875 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.443 [2024-11-26 17:33:19.669298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:42.443 [2024-11-26 17:33:19.669365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:42.443 [2024-11-26 17:33:19.669390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:36:42.443 [2024-11-26 17:33:19.669402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:42.443 [2024-11-26 17:33:19.669852] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:42.443 [2024-11-26 17:33:19.669878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:42.443 [2024-11-26 17:33:19.669966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:42.443 [2024-11-26 17:33:19.669994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:42.443 pt2 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.443 [2024-11-26 17:33:19.677283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:36:42.443 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:42.444 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:42.444 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:42.444 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:42.444 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.444 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.444 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.444 17:33:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:42.444 "name": "raid_bdev1", 00:36:42.444 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:42.444 "strip_size_kb": 64, 00:36:42.444 "state": "configuring", 00:36:42.444 "raid_level": "raid5f", 00:36:42.444 "superblock": true, 00:36:42.444 "num_base_bdevs": 3, 00:36:42.444 "num_base_bdevs_discovered": 1, 00:36:42.444 "num_base_bdevs_operational": 3, 00:36:42.444 "base_bdevs_list": [ 00:36:42.444 { 00:36:42.444 "name": "pt1", 00:36:42.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:42.444 "is_configured": true, 00:36:42.444 "data_offset": 2048, 00:36:42.444 "data_size": 63488 00:36:42.444 }, 00:36:42.444 { 00:36:42.444 "name": null, 00:36:42.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:42.444 "is_configured": false, 00:36:42.444 "data_offset": 0, 00:36:42.444 "data_size": 63488 00:36:42.444 }, 00:36:42.444 { 00:36:42.444 "name": null, 00:36:42.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:42.444 "is_configured": false, 00:36:42.444 "data_offset": 2048, 00:36:42.444 "data_size": 63488 00:36:42.444 } 00:36:42.444 ] 00:36:42.444 }' 00:36:42.444 17:33:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:42.444 17:33:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.703 [2024-11-26 17:33:20.117364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:42.703 [2024-11-26 17:33:20.117440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:42.703 [2024-11-26 17:33:20.117462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:36:42.703 [2024-11-26 17:33:20.117476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:42.703 [2024-11-26 17:33:20.117957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:42.703 [2024-11-26 17:33:20.117988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:42.703 [2024-11-26 17:33:20.118084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:42.703 [2024-11-26 17:33:20.118114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:42.703 pt2 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:42.703 17:33:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.703 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.703 [2024-11-26 17:33:20.129346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:42.703 [2024-11-26 17:33:20.129399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:42.703 [2024-11-26 17:33:20.129415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:42.703 [2024-11-26 17:33:20.129428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:42.703 [2024-11-26 17:33:20.129809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:42.703 [2024-11-26 17:33:20.129839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:42.703 [2024-11-26 17:33:20.129896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:42.703 [2024-11-26 17:33:20.129917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:42.703 [2024-11-26 17:33:20.130055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:42.703 [2024-11-26 17:33:20.130083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:42.703 [2024-11-26 17:33:20.130341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:42.703 [2024-11-26 17:33:20.135639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:42.704 [2024-11-26 17:33:20.135661] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:36:42.704 [2024-11-26 17:33:20.135827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:42.704 pt3 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:36:42.704 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:42.962 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.963 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:42.963 "name": "raid_bdev1", 00:36:42.963 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:42.963 "strip_size_kb": 64, 00:36:42.963 "state": "online", 00:36:42.963 "raid_level": "raid5f", 00:36:42.963 "superblock": true, 00:36:42.963 "num_base_bdevs": 3, 00:36:42.963 "num_base_bdevs_discovered": 3, 00:36:42.963 "num_base_bdevs_operational": 3, 00:36:42.963 "base_bdevs_list": [ 00:36:42.963 { 00:36:42.963 "name": "pt1", 00:36:42.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:42.963 "is_configured": true, 00:36:42.963 "data_offset": 2048, 00:36:42.963 "data_size": 63488 00:36:42.963 }, 00:36:42.963 { 00:36:42.963 "name": "pt2", 00:36:42.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:42.963 "is_configured": true, 00:36:42.963 "data_offset": 2048, 00:36:42.963 "data_size": 63488 00:36:42.963 }, 00:36:42.963 { 00:36:42.963 "name": "pt3", 00:36:42.963 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:42.963 "is_configured": true, 00:36:42.963 "data_offset": 2048, 00:36:42.963 "data_size": 63488 00:36:42.963 } 00:36:42.963 ] 00:36:42.963 }' 00:36:42.963 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:42.963 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:43.222 
17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.222 [2024-11-26 17:33:20.586803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:43.222 "name": "raid_bdev1", 00:36:43.222 "aliases": [ 00:36:43.222 "a62897ec-6086-495e-a08c-e9d718148fc0" 00:36:43.222 ], 00:36:43.222 "product_name": "Raid Volume", 00:36:43.222 "block_size": 512, 00:36:43.222 "num_blocks": 126976, 00:36:43.222 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:43.222 "assigned_rate_limits": { 00:36:43.222 "rw_ios_per_sec": 0, 00:36:43.222 "rw_mbytes_per_sec": 0, 00:36:43.222 "r_mbytes_per_sec": 0, 00:36:43.222 "w_mbytes_per_sec": 0 00:36:43.222 }, 00:36:43.222 "claimed": false, 00:36:43.222 "zoned": false, 00:36:43.222 "supported_io_types": { 00:36:43.222 "read": true, 00:36:43.222 "write": true, 00:36:43.222 "unmap": false, 00:36:43.222 "flush": false, 00:36:43.222 "reset": true, 00:36:43.222 "nvme_admin": false, 00:36:43.222 "nvme_io": false, 00:36:43.222 "nvme_io_md": false, 00:36:43.222 "write_zeroes": true, 00:36:43.222 "zcopy": false, 00:36:43.222 "get_zone_info": false, 
00:36:43.222 "zone_management": false, 00:36:43.222 "zone_append": false, 00:36:43.222 "compare": false, 00:36:43.222 "compare_and_write": false, 00:36:43.222 "abort": false, 00:36:43.222 "seek_hole": false, 00:36:43.222 "seek_data": false, 00:36:43.222 "copy": false, 00:36:43.222 "nvme_iov_md": false 00:36:43.222 }, 00:36:43.222 "driver_specific": { 00:36:43.222 "raid": { 00:36:43.222 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:43.222 "strip_size_kb": 64, 00:36:43.222 "state": "online", 00:36:43.222 "raid_level": "raid5f", 00:36:43.222 "superblock": true, 00:36:43.222 "num_base_bdevs": 3, 00:36:43.222 "num_base_bdevs_discovered": 3, 00:36:43.222 "num_base_bdevs_operational": 3, 00:36:43.222 "base_bdevs_list": [ 00:36:43.222 { 00:36:43.222 "name": "pt1", 00:36:43.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:43.222 "is_configured": true, 00:36:43.222 "data_offset": 2048, 00:36:43.222 "data_size": 63488 00:36:43.222 }, 00:36:43.222 { 00:36:43.222 "name": "pt2", 00:36:43.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:43.222 "is_configured": true, 00:36:43.222 "data_offset": 2048, 00:36:43.222 "data_size": 63488 00:36:43.222 }, 00:36:43.222 { 00:36:43.222 "name": "pt3", 00:36:43.222 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:43.222 "is_configured": true, 00:36:43.222 "data_offset": 2048, 00:36:43.222 "data_size": 63488 00:36:43.222 } 00:36:43.222 ] 00:36:43.222 } 00:36:43.222 } 00:36:43.222 }' 00:36:43.222 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:43.482 pt2 00:36:43.482 pt3' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.482 [2024-11-26 17:33:20.850782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:43.482 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a62897ec-6086-495e-a08c-e9d718148fc0 '!=' a62897ec-6086-495e-a08c-e9d718148fc0 ']' 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:36:43.483 17:33:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.483 [2024-11-26 17:33:20.890710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.483 17:33:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:43.483 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.742 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:43.742 "name": "raid_bdev1", 00:36:43.742 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:43.742 "strip_size_kb": 64, 00:36:43.742 "state": "online", 00:36:43.742 "raid_level": "raid5f", 00:36:43.742 "superblock": true, 00:36:43.742 "num_base_bdevs": 3, 00:36:43.742 "num_base_bdevs_discovered": 2, 00:36:43.742 "num_base_bdevs_operational": 2, 00:36:43.742 "base_bdevs_list": [ 00:36:43.742 { 00:36:43.742 "name": null, 00:36:43.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:43.742 "is_configured": false, 00:36:43.742 "data_offset": 0, 00:36:43.742 "data_size": 63488 00:36:43.742 }, 00:36:43.742 { 00:36:43.742 "name": "pt2", 00:36:43.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:43.742 "is_configured": true, 00:36:43.742 "data_offset": 2048, 00:36:43.742 "data_size": 63488 00:36:43.742 }, 00:36:43.742 { 00:36:43.742 "name": "pt3", 00:36:43.742 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:43.742 "is_configured": true, 00:36:43.742 "data_offset": 2048, 00:36:43.742 "data_size": 63488 00:36:43.742 } 00:36:43.742 ] 00:36:43.742 }' 00:36:43.742 17:33:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:43.742 17:33:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.002 [2024-11-26 17:33:21.326750] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:44.002 [2024-11-26 17:33:21.326907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:44.002 [2024-11-26 17:33:21.327075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:44.002 [2024-11-26 17:33:21.327142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:44.002 [2024-11-26 17:33:21.327161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.002 [2024-11-26 17:33:21.406734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:44.002 [2024-11-26 17:33:21.406792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:44.002 [2024-11-26 17:33:21.406812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:36:44.002 [2024-11-26 17:33:21.406826] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:36:44.002 [2024-11-26 17:33:21.409291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:44.002 [2024-11-26 17:33:21.409334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:44.002 [2024-11-26 17:33:21.409408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:44.002 [2024-11-26 17:33:21.409465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:44.002 pt2 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.002 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.261 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:44.261 "name": "raid_bdev1", 00:36:44.261 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:44.261 "strip_size_kb": 64, 00:36:44.261 "state": "configuring", 00:36:44.261 "raid_level": "raid5f", 00:36:44.261 "superblock": true, 00:36:44.261 "num_base_bdevs": 3, 00:36:44.261 "num_base_bdevs_discovered": 1, 00:36:44.261 "num_base_bdevs_operational": 2, 00:36:44.261 "base_bdevs_list": [ 00:36:44.261 { 00:36:44.262 "name": null, 00:36:44.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.262 "is_configured": false, 00:36:44.262 "data_offset": 2048, 00:36:44.262 "data_size": 63488 00:36:44.262 }, 00:36:44.262 { 00:36:44.262 "name": "pt2", 00:36:44.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:44.262 "is_configured": true, 00:36:44.262 "data_offset": 2048, 00:36:44.262 "data_size": 63488 00:36:44.262 }, 00:36:44.262 { 00:36:44.262 "name": null, 00:36:44.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:44.262 "is_configured": false, 00:36:44.262 "data_offset": 2048, 00:36:44.262 "data_size": 63488 00:36:44.262 } 00:36:44.262 ] 00:36:44.262 }' 00:36:44.262 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:44.262 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.521 [2024-11-26 17:33:21.834850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:44.521 [2024-11-26 17:33:21.834925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:44.521 [2024-11-26 17:33:21.834949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:36:44.521 [2024-11-26 17:33:21.834963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:44.521 [2024-11-26 17:33:21.835472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:44.521 [2024-11-26 17:33:21.835651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:44.521 [2024-11-26 17:33:21.835758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:44.521 [2024-11-26 17:33:21.835801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:44.521 [2024-11-26 17:33:21.835931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:44.521 [2024-11-26 17:33:21.835944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:44.521 [2024-11-26 17:33:21.836241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:44.521 [2024-11-26 17:33:21.842147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:44.521 [2024-11-26 17:33:21.842298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:36:44.521 [2024-11-26 17:33:21.842639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:44.521 pt3 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.521 17:33:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:44.521 "name": "raid_bdev1", 00:36:44.521 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:44.521 "strip_size_kb": 64, 00:36:44.521 "state": "online", 00:36:44.521 "raid_level": "raid5f", 00:36:44.521 "superblock": true, 00:36:44.521 "num_base_bdevs": 3, 00:36:44.521 "num_base_bdevs_discovered": 2, 00:36:44.521 "num_base_bdevs_operational": 2, 00:36:44.521 "base_bdevs_list": [ 00:36:44.521 { 00:36:44.521 "name": null, 00:36:44.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.521 "is_configured": false, 00:36:44.521 "data_offset": 2048, 00:36:44.521 "data_size": 63488 00:36:44.521 }, 00:36:44.521 { 00:36:44.521 "name": "pt2", 00:36:44.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:44.521 "is_configured": true, 00:36:44.521 "data_offset": 2048, 00:36:44.521 "data_size": 63488 00:36:44.521 }, 00:36:44.521 { 00:36:44.521 "name": "pt3", 00:36:44.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:44.521 "is_configured": true, 00:36:44.521 "data_offset": 2048, 00:36:44.521 "data_size": 63488 00:36:44.521 } 00:36:44.521 ] 00:36:44.521 }' 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:44.521 17:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.090 [2024-11-26 17:33:22.285461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:45.090 [2024-11-26 17:33:22.285497] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:45.090 [2024-11-26 17:33:22.285576] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:45.090 [2024-11-26 17:33:22.285642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:45.090 [2024-11-26 17:33:22.285654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.090 [2024-11-26 17:33:22.337486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:45.090 [2024-11-26 17:33:22.337547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:45.090 [2024-11-26 17:33:22.337568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:36:45.090 [2024-11-26 17:33:22.337580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:45.090 [2024-11-26 17:33:22.340352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:45.090 [2024-11-26 17:33:22.340393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:45.090 [2024-11-26 17:33:22.340477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:45.090 [2024-11-26 17:33:22.340526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:45.090 [2024-11-26 17:33:22.340674] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:45.090 [2024-11-26 17:33:22.340710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:45.090 [2024-11-26 17:33:22.340731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:36:45.090 [2024-11-26 17:33:22.340795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:45.090 pt1 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:36:45.090 17:33:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:45.090 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:45.091 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.091 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.091 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.091 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.091 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:45.091 "name": "raid_bdev1", 00:36:45.091 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:45.091 "strip_size_kb": 64, 00:36:45.091 "state": "configuring", 00:36:45.091 "raid_level": "raid5f", 00:36:45.091 
"superblock": true, 00:36:45.091 "num_base_bdevs": 3, 00:36:45.091 "num_base_bdevs_discovered": 1, 00:36:45.091 "num_base_bdevs_operational": 2, 00:36:45.091 "base_bdevs_list": [ 00:36:45.091 { 00:36:45.091 "name": null, 00:36:45.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:45.091 "is_configured": false, 00:36:45.091 "data_offset": 2048, 00:36:45.091 "data_size": 63488 00:36:45.091 }, 00:36:45.091 { 00:36:45.091 "name": "pt2", 00:36:45.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:45.091 "is_configured": true, 00:36:45.091 "data_offset": 2048, 00:36:45.091 "data_size": 63488 00:36:45.091 }, 00:36:45.091 { 00:36:45.091 "name": null, 00:36:45.091 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:45.091 "is_configured": false, 00:36:45.091 "data_offset": 2048, 00:36:45.091 "data_size": 63488 00:36:45.091 } 00:36:45.091 ] 00:36:45.091 }' 00:36:45.091 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:45.091 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.350 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:36:45.350 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.350 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.350 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.609 [2024-11-26 17:33:22.829590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:45.609 [2024-11-26 17:33:22.829656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:45.609 [2024-11-26 17:33:22.829681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:36:45.609 [2024-11-26 17:33:22.829693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:45.609 [2024-11-26 17:33:22.830186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:45.609 [2024-11-26 17:33:22.830209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:45.609 [2024-11-26 17:33:22.830309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:45.609 [2024-11-26 17:33:22.830335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:45.609 [2024-11-26 17:33:22.830466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:36:45.609 [2024-11-26 17:33:22.830476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:45.609 [2024-11-26 17:33:22.830768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:36:45.609 [2024-11-26 17:33:22.836440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:36:45.609 [2024-11-26 17:33:22.836478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:36:45.609 [2024-11-26 17:33:22.836726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:45.609 pt3 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:45.609 "name": "raid_bdev1", 00:36:45.609 "uuid": "a62897ec-6086-495e-a08c-e9d718148fc0", 00:36:45.609 "strip_size_kb": 64, 00:36:45.609 "state": "online", 00:36:45.609 "raid_level": 
"raid5f", 00:36:45.609 "superblock": true, 00:36:45.609 "num_base_bdevs": 3, 00:36:45.609 "num_base_bdevs_discovered": 2, 00:36:45.609 "num_base_bdevs_operational": 2, 00:36:45.609 "base_bdevs_list": [ 00:36:45.609 { 00:36:45.609 "name": null, 00:36:45.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:45.609 "is_configured": false, 00:36:45.609 "data_offset": 2048, 00:36:45.609 "data_size": 63488 00:36:45.609 }, 00:36:45.609 { 00:36:45.609 "name": "pt2", 00:36:45.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:45.609 "is_configured": true, 00:36:45.609 "data_offset": 2048, 00:36:45.609 "data_size": 63488 00:36:45.609 }, 00:36:45.609 { 00:36:45.609 "name": "pt3", 00:36:45.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:45.609 "is_configured": true, 00:36:45.609 "data_offset": 2048, 00:36:45.609 "data_size": 63488 00:36:45.609 } 00:36:45.609 ] 00:36:45.609 }' 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:45.609 17:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.868 17:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:36:45.868 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.868 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.868 17:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:45.868 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.127 [2024-11-26 17:33:23.339869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a62897ec-6086-495e-a08c-e9d718148fc0 '!=' a62897ec-6086-495e-a08c-e9d718148fc0 ']' 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81605 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81605 ']' 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81605 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:46.127 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81605 00:36:46.127 killing process with pid 81605 00:36:46.128 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:46.128 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:46.128 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81605' 00:36:46.128 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81605 00:36:46.128 [2024-11-26 17:33:23.417781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:46.128 [2024-11-26 17:33:23.417883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:36:46.128 17:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81605 00:36:46.128 [2024-11-26 17:33:23.417948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:46.128 [2024-11-26 17:33:23.417964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:36:46.387 [2024-11-26 17:33:23.724987] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:47.764 17:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:36:47.764 00:36:47.764 real 0m7.905s 00:36:47.764 user 0m12.331s 00:36:47.764 sys 0m1.563s 00:36:47.764 17:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.764 17:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:47.764 ************************************ 00:36:47.764 END TEST raid5f_superblock_test 00:36:47.764 ************************************ 00:36:47.764 17:33:24 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:36:47.764 17:33:24 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:36:47.764 17:33:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:36:47.764 17:33:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.764 17:33:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:47.764 ************************************ 00:36:47.764 START TEST raid5f_rebuild_test 00:36:47.764 ************************************ 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:47.764 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:36:47.765 17:33:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82049 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82049 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82049 ']' 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:47.765 17:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:47.765 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:36:47.765 Zero copy mechanism will not be used. 00:36:47.765 [2024-11-26 17:33:25.087553] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:36:47.765 [2024-11-26 17:33:25.087721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82049 ] 00:36:48.038 [2024-11-26 17:33:25.276422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.038 [2024-11-26 17:33:25.392168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.344 [2024-11-26 17:33:25.594912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:48.344 [2024-11-26 17:33:25.594971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.604 BaseBdev1_malloc 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.604 17:33:25 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.604 [2024-11-26 17:33:25.986901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:48.604 [2024-11-26 17:33:25.987158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:48.604 [2024-11-26 17:33:25.987196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:48.604 [2024-11-26 17:33:25.987212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:48.604 [2024-11-26 17:33:25.989593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:48.604 [2024-11-26 17:33:25.989639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:48.604 BaseBdev1 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.604 17:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.604 BaseBdev2_malloc 00:36:48.604 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.604 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:48.604 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.604 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.604 [2024-11-26 17:33:26.040293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:36:48.604 [2024-11-26 17:33:26.040362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:48.604 [2024-11-26 17:33:26.040387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:48.604 [2024-11-26 17:33:26.040401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:48.604 [2024-11-26 17:33:26.042789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:48.604 [2024-11-26 17:33:26.042833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:48.604 BaseBdev2 00:36:48.604 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.604 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:48.604 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:48.604 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.604 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.864 BaseBdev3_malloc 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.864 [2024-11-26 17:33:26.110322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:36:48.864 [2024-11-26 17:33:26.110530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:48.864 [2024-11-26 17:33:26.110594] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:36:48.864 [2024-11-26 17:33:26.110701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:48.864 [2024-11-26 17:33:26.113184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:48.864 [2024-11-26 17:33:26.113324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:48.864 BaseBdev3 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.864 spare_malloc 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.864 spare_delay 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.864 [2024-11-26 17:33:26.173039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:48.864 [2024-11-26 17:33:26.173110] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:48.864 [2024-11-26 17:33:26.173130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:36:48.864 [2024-11-26 17:33:26.173143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:48.864 [2024-11-26 17:33:26.175509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:48.864 [2024-11-26 17:33:26.175555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:48.864 spare 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.864 [2024-11-26 17:33:26.181112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:48.864 [2024-11-26 17:33:26.183274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:48.864 [2024-11-26 17:33:26.183369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:48.864 [2024-11-26 17:33:26.183566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:48.864 [2024-11-26 17:33:26.183614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:36:48.864 [2024-11-26 17:33:26.183919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:48.864 [2024-11-26 17:33:26.189807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:48.864 [2024-11-26 17:33:26.189946] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:48.864 [2024-11-26 17:33:26.190286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:48.864 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.864 17:33:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:48.864 "name": "raid_bdev1", 00:36:48.864 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:48.864 "strip_size_kb": 64, 00:36:48.864 "state": "online", 00:36:48.864 "raid_level": "raid5f", 00:36:48.864 "superblock": false, 00:36:48.864 "num_base_bdevs": 3, 00:36:48.864 "num_base_bdevs_discovered": 3, 00:36:48.864 "num_base_bdevs_operational": 3, 00:36:48.864 "base_bdevs_list": [ 00:36:48.864 { 00:36:48.864 "name": "BaseBdev1", 00:36:48.864 "uuid": "e47e75fa-2a17-52b4-9b17-c0cb9945e539", 00:36:48.864 "is_configured": true, 00:36:48.864 "data_offset": 0, 00:36:48.864 "data_size": 65536 00:36:48.864 }, 00:36:48.864 { 00:36:48.864 "name": "BaseBdev2", 00:36:48.864 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:48.864 "is_configured": true, 00:36:48.864 "data_offset": 0, 00:36:48.864 "data_size": 65536 00:36:48.864 }, 00:36:48.864 { 00:36:48.864 "name": "BaseBdev3", 00:36:48.864 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:48.864 "is_configured": true, 00:36:48.864 "data_offset": 0, 00:36:48.864 "data_size": 65536 00:36:48.865 } 00:36:48.865 ] 00:36:48.865 }' 00:36:48.865 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:48.865 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.432 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.433 [2024-11-26 17:33:26.625533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:36:49.433 17:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:49.692 [2024-11-26 17:33:27.009458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:36:49.692 /dev/nbd0 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:49.692 1+0 records in 00:36:49.692 1+0 records out 00:36:49.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249782 s, 16.4 MB/s 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:36:49.692 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:36:50.261 512+0 records in 00:36:50.261 512+0 records out 00:36:50.261 67108864 bytes (67 MB, 64 MiB) copied, 0.426273 s, 157 MB/s 00:36:50.261 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:36:50.261 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:50.261 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:50.261 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:50.261 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:36:50.261 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:50.261 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:50.520 
[2024-11-26 17:33:27.780604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:50.520 [2024-11-26 17:33:27.800483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:50.520 "name": "raid_bdev1", 00:36:50.520 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:50.520 "strip_size_kb": 64, 00:36:50.520 "state": "online", 00:36:50.520 "raid_level": "raid5f", 00:36:50.520 "superblock": false, 00:36:50.520 "num_base_bdevs": 3, 00:36:50.520 "num_base_bdevs_discovered": 2, 00:36:50.520 "num_base_bdevs_operational": 2, 00:36:50.520 "base_bdevs_list": [ 00:36:50.520 { 00:36:50.520 "name": null, 00:36:50.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:50.520 "is_configured": false, 00:36:50.520 "data_offset": 0, 00:36:50.520 "data_size": 65536 00:36:50.520 }, 00:36:50.520 { 00:36:50.520 "name": "BaseBdev2", 00:36:50.520 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:50.520 "is_configured": true, 00:36:50.520 "data_offset": 0, 00:36:50.520 "data_size": 65536 00:36:50.520 }, 00:36:50.520 { 00:36:50.520 "name": "BaseBdev3", 00:36:50.520 "uuid": 
"5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:50.520 "is_configured": true, 00:36:50.520 "data_offset": 0, 00:36:50.520 "data_size": 65536 00:36:50.520 } 00:36:50.520 ] 00:36:50.520 }' 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:50.520 17:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:50.779 17:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:50.779 17:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.779 17:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:50.779 [2024-11-26 17:33:28.180822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:50.779 [2024-11-26 17:33:28.197136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:36:50.779 17:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.779 17:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:36:50.779 [2024-11-26 17:33:28.205082] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.157 17:33:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.157 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:52.157 "name": "raid_bdev1", 00:36:52.157 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:52.157 "strip_size_kb": 64, 00:36:52.157 "state": "online", 00:36:52.157 "raid_level": "raid5f", 00:36:52.157 "superblock": false, 00:36:52.157 "num_base_bdevs": 3, 00:36:52.157 "num_base_bdevs_discovered": 3, 00:36:52.157 "num_base_bdevs_operational": 3, 00:36:52.158 "process": { 00:36:52.158 "type": "rebuild", 00:36:52.158 "target": "spare", 00:36:52.158 "progress": { 00:36:52.158 "blocks": 20480, 00:36:52.158 "percent": 15 00:36:52.158 } 00:36:52.158 }, 00:36:52.158 "base_bdevs_list": [ 00:36:52.158 { 00:36:52.158 "name": "spare", 00:36:52.158 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:52.158 "is_configured": true, 00:36:52.158 "data_offset": 0, 00:36:52.158 "data_size": 65536 00:36:52.158 }, 00:36:52.158 { 00:36:52.158 "name": "BaseBdev2", 00:36:52.158 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:52.158 "is_configured": true, 00:36:52.158 "data_offset": 0, 00:36:52.158 "data_size": 65536 00:36:52.158 }, 00:36:52.158 { 00:36:52.158 "name": "BaseBdev3", 00:36:52.158 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:52.158 "is_configured": true, 00:36:52.158 "data_offset": 0, 00:36:52.158 "data_size": 65536 00:36:52.158 } 00:36:52.158 ] 00:36:52.158 }' 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:52.158 [2024-11-26 17:33:29.354487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:52.158 [2024-11-26 17:33:29.417443] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:52.158 [2024-11-26 17:33:29.417758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:52.158 [2024-11-26 17:33:29.417790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:52.158 [2024-11-26 17:33:29.417802] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:52.158 "name": "raid_bdev1", 00:36:52.158 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:52.158 "strip_size_kb": 64, 00:36:52.158 "state": "online", 00:36:52.158 "raid_level": "raid5f", 00:36:52.158 "superblock": false, 00:36:52.158 "num_base_bdevs": 3, 00:36:52.158 "num_base_bdevs_discovered": 2, 00:36:52.158 "num_base_bdevs_operational": 2, 00:36:52.158 "base_bdevs_list": [ 00:36:52.158 { 00:36:52.158 "name": null, 00:36:52.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:52.158 "is_configured": false, 00:36:52.158 "data_offset": 0, 00:36:52.158 "data_size": 65536 00:36:52.158 }, 00:36:52.158 { 00:36:52.158 "name": "BaseBdev2", 00:36:52.158 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:52.158 "is_configured": true, 00:36:52.158 "data_offset": 0, 00:36:52.158 "data_size": 65536 00:36:52.158 }, 00:36:52.158 { 00:36:52.158 "name": "BaseBdev3", 00:36:52.158 "uuid": 
"5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:52.158 "is_configured": true, 00:36:52.158 "data_offset": 0, 00:36:52.158 "data_size": 65536 00:36:52.158 } 00:36:52.158 ] 00:36:52.158 }' 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:52.158 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:52.727 "name": "raid_bdev1", 00:36:52.727 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:52.727 "strip_size_kb": 64, 00:36:52.727 "state": "online", 00:36:52.727 "raid_level": "raid5f", 00:36:52.727 "superblock": false, 00:36:52.727 "num_base_bdevs": 3, 00:36:52.727 "num_base_bdevs_discovered": 2, 00:36:52.727 "num_base_bdevs_operational": 2, 00:36:52.727 "base_bdevs_list": [ 00:36:52.727 { 00:36:52.727 
"name": null, 00:36:52.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:52.727 "is_configured": false, 00:36:52.727 "data_offset": 0, 00:36:52.727 "data_size": 65536 00:36:52.727 }, 00:36:52.727 { 00:36:52.727 "name": "BaseBdev2", 00:36:52.727 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:52.727 "is_configured": true, 00:36:52.727 "data_offset": 0, 00:36:52.727 "data_size": 65536 00:36:52.727 }, 00:36:52.727 { 00:36:52.727 "name": "BaseBdev3", 00:36:52.727 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:52.727 "is_configured": true, 00:36:52.727 "data_offset": 0, 00:36:52.727 "data_size": 65536 00:36:52.727 } 00:36:52.727 ] 00:36:52.727 }' 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:52.727 17:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:52.727 17:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:52.727 17:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:52.727 17:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:52.727 17:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.727 17:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:52.727 [2024-11-26 17:33:30.051909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:52.727 [2024-11-26 17:33:30.068556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:36:52.727 17:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.727 17:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:36:52.727 [2024-11-26 17:33:30.076301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:36:53.664 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:53.664 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:53.665 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:53.665 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:53.665 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:53.665 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:53.665 17:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.665 17:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.665 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:53.665 17:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:53.924 "name": "raid_bdev1", 00:36:53.924 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:53.924 "strip_size_kb": 64, 00:36:53.924 "state": "online", 00:36:53.924 "raid_level": "raid5f", 00:36:53.924 "superblock": false, 00:36:53.924 "num_base_bdevs": 3, 00:36:53.924 "num_base_bdevs_discovered": 3, 00:36:53.924 "num_base_bdevs_operational": 3, 00:36:53.924 "process": { 00:36:53.924 "type": "rebuild", 00:36:53.924 "target": "spare", 00:36:53.924 "progress": { 00:36:53.924 "blocks": 18432, 00:36:53.924 "percent": 14 00:36:53.924 } 00:36:53.924 }, 00:36:53.924 "base_bdevs_list": [ 00:36:53.924 { 00:36:53.924 "name": "spare", 00:36:53.924 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:53.924 "is_configured": true, 00:36:53.924 "data_offset": 0, 
00:36:53.924 "data_size": 65536 00:36:53.924 }, 00:36:53.924 { 00:36:53.924 "name": "BaseBdev2", 00:36:53.924 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:53.924 "is_configured": true, 00:36:53.924 "data_offset": 0, 00:36:53.924 "data_size": 65536 00:36:53.924 }, 00:36:53.924 { 00:36:53.924 "name": "BaseBdev3", 00:36:53.924 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:53.924 "is_configured": true, 00:36:53.924 "data_offset": 0, 00:36:53.924 "data_size": 65536 00:36:53.924 } 00:36:53.924 ] 00:36:53.924 }' 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=565 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:53.924 17:33:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:53.924 "name": "raid_bdev1", 00:36:53.924 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:53.924 "strip_size_kb": 64, 00:36:53.924 "state": "online", 00:36:53.924 "raid_level": "raid5f", 00:36:53.924 "superblock": false, 00:36:53.924 "num_base_bdevs": 3, 00:36:53.924 "num_base_bdevs_discovered": 3, 00:36:53.924 "num_base_bdevs_operational": 3, 00:36:53.924 "process": { 00:36:53.924 "type": "rebuild", 00:36:53.924 "target": "spare", 00:36:53.924 "progress": { 00:36:53.924 "blocks": 22528, 00:36:53.924 "percent": 17 00:36:53.924 } 00:36:53.924 }, 00:36:53.924 "base_bdevs_list": [ 00:36:53.924 { 00:36:53.924 "name": "spare", 00:36:53.924 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:53.924 "is_configured": true, 00:36:53.924 "data_offset": 0, 00:36:53.924 "data_size": 65536 00:36:53.924 }, 00:36:53.924 { 00:36:53.924 "name": "BaseBdev2", 00:36:53.924 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:53.924 "is_configured": true, 00:36:53.924 "data_offset": 0, 00:36:53.924 "data_size": 65536 00:36:53.924 }, 00:36:53.924 { 00:36:53.924 "name": "BaseBdev3", 00:36:53.924 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:53.924 "is_configured": true, 00:36:53.924 "data_offset": 0, 00:36:53.924 "data_size": 65536 00:36:53.924 } 
00:36:53.924 ] 00:36:53.924 }' 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:53.924 17:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:55.302 "name": "raid_bdev1", 00:36:55.302 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:55.302 
"strip_size_kb": 64, 00:36:55.302 "state": "online", 00:36:55.302 "raid_level": "raid5f", 00:36:55.302 "superblock": false, 00:36:55.302 "num_base_bdevs": 3, 00:36:55.302 "num_base_bdevs_discovered": 3, 00:36:55.302 "num_base_bdevs_operational": 3, 00:36:55.302 "process": { 00:36:55.302 "type": "rebuild", 00:36:55.302 "target": "spare", 00:36:55.302 "progress": { 00:36:55.302 "blocks": 45056, 00:36:55.302 "percent": 34 00:36:55.302 } 00:36:55.302 }, 00:36:55.302 "base_bdevs_list": [ 00:36:55.302 { 00:36:55.302 "name": "spare", 00:36:55.302 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:55.302 "is_configured": true, 00:36:55.302 "data_offset": 0, 00:36:55.302 "data_size": 65536 00:36:55.302 }, 00:36:55.302 { 00:36:55.302 "name": "BaseBdev2", 00:36:55.302 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:55.302 "is_configured": true, 00:36:55.302 "data_offset": 0, 00:36:55.302 "data_size": 65536 00:36:55.302 }, 00:36:55.302 { 00:36:55.302 "name": "BaseBdev3", 00:36:55.302 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:55.302 "is_configured": true, 00:36:55.302 "data_offset": 0, 00:36:55.302 "data_size": 65536 00:36:55.302 } 00:36:55.302 ] 00:36:55.302 }' 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:55.302 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:55.303 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:55.303 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:55.303 17:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:56.240 17:33:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:56.240 "name": "raid_bdev1", 00:36:56.240 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:56.240 "strip_size_kb": 64, 00:36:56.240 "state": "online", 00:36:56.240 "raid_level": "raid5f", 00:36:56.240 "superblock": false, 00:36:56.240 "num_base_bdevs": 3, 00:36:56.240 "num_base_bdevs_discovered": 3, 00:36:56.240 "num_base_bdevs_operational": 3, 00:36:56.240 "process": { 00:36:56.240 "type": "rebuild", 00:36:56.240 "target": "spare", 00:36:56.240 "progress": { 00:36:56.240 "blocks": 67584, 00:36:56.240 "percent": 51 00:36:56.240 } 00:36:56.240 }, 00:36:56.240 "base_bdevs_list": [ 00:36:56.240 { 00:36:56.240 "name": "spare", 00:36:56.240 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:56.240 "is_configured": true, 00:36:56.240 "data_offset": 0, 00:36:56.240 "data_size": 65536 00:36:56.240 }, 00:36:56.240 { 00:36:56.240 "name": "BaseBdev2", 00:36:56.240 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:56.240 
"is_configured": true, 00:36:56.240 "data_offset": 0, 00:36:56.240 "data_size": 65536 00:36:56.240 }, 00:36:56.240 { 00:36:56.240 "name": "BaseBdev3", 00:36:56.240 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:56.240 "is_configured": true, 00:36:56.240 "data_offset": 0, 00:36:56.240 "data_size": 65536 00:36:56.240 } 00:36:56.240 ] 00:36:56.240 }' 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:56.240 17:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:36:57.175 17:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.433 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:57.433 "name": "raid_bdev1", 00:36:57.433 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:57.433 "strip_size_kb": 64, 00:36:57.433 "state": "online", 00:36:57.433 "raid_level": "raid5f", 00:36:57.433 "superblock": false, 00:36:57.433 "num_base_bdevs": 3, 00:36:57.433 "num_base_bdevs_discovered": 3, 00:36:57.433 "num_base_bdevs_operational": 3, 00:36:57.433 "process": { 00:36:57.433 "type": "rebuild", 00:36:57.433 "target": "spare", 00:36:57.433 "progress": { 00:36:57.433 "blocks": 90112, 00:36:57.433 "percent": 68 00:36:57.433 } 00:36:57.433 }, 00:36:57.433 "base_bdevs_list": [ 00:36:57.433 { 00:36:57.433 "name": "spare", 00:36:57.433 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:57.433 "is_configured": true, 00:36:57.433 "data_offset": 0, 00:36:57.433 "data_size": 65536 00:36:57.433 }, 00:36:57.433 { 00:36:57.433 "name": "BaseBdev2", 00:36:57.433 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:57.433 "is_configured": true, 00:36:57.433 "data_offset": 0, 00:36:57.433 "data_size": 65536 00:36:57.433 }, 00:36:57.433 { 00:36:57.433 "name": "BaseBdev3", 00:36:57.433 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:57.433 "is_configured": true, 00:36:57.433 "data_offset": 0, 00:36:57.433 "data_size": 65536 00:36:57.433 } 00:36:57.433 ] 00:36:57.433 }' 00:36:57.433 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:57.433 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:57.433 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:57.433 17:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:57.433 17:33:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:58.372 "name": "raid_bdev1", 00:36:58.372 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:58.372 "strip_size_kb": 64, 00:36:58.372 "state": "online", 00:36:58.372 "raid_level": "raid5f", 00:36:58.372 "superblock": false, 00:36:58.372 "num_base_bdevs": 3, 00:36:58.372 "num_base_bdevs_discovered": 3, 00:36:58.372 "num_base_bdevs_operational": 3, 00:36:58.372 "process": { 00:36:58.372 "type": "rebuild", 00:36:58.372 "target": "spare", 00:36:58.372 "progress": { 00:36:58.372 "blocks": 114688, 00:36:58.372 "percent": 87 00:36:58.372 } 00:36:58.372 }, 00:36:58.372 "base_bdevs_list": [ 00:36:58.372 { 
00:36:58.372 "name": "spare", 00:36:58.372 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:58.372 "is_configured": true, 00:36:58.372 "data_offset": 0, 00:36:58.372 "data_size": 65536 00:36:58.372 }, 00:36:58.372 { 00:36:58.372 "name": "BaseBdev2", 00:36:58.372 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:58.372 "is_configured": true, 00:36:58.372 "data_offset": 0, 00:36:58.372 "data_size": 65536 00:36:58.372 }, 00:36:58.372 { 00:36:58.372 "name": "BaseBdev3", 00:36:58.372 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:58.372 "is_configured": true, 00:36:58.372 "data_offset": 0, 00:36:58.372 "data_size": 65536 00:36:58.372 } 00:36:58.372 ] 00:36:58.372 }' 00:36:58.372 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:58.631 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:58.631 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:58.631 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:58.631 17:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:59.199 [2024-11-26 17:33:36.540786] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:59.199 [2024-11-26 17:33:36.540891] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:59.199 [2024-11-26 17:33:36.540941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:59.458 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:59.458 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:59.458 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:59.458 17:33:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:59.458 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:59.458 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:59.458 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.458 17:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.458 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:59.458 17:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.717 17:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.717 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:59.717 "name": "raid_bdev1", 00:36:59.717 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:59.717 "strip_size_kb": 64, 00:36:59.717 "state": "online", 00:36:59.717 "raid_level": "raid5f", 00:36:59.717 "superblock": false, 00:36:59.717 "num_base_bdevs": 3, 00:36:59.717 "num_base_bdevs_discovered": 3, 00:36:59.717 "num_base_bdevs_operational": 3, 00:36:59.717 "base_bdevs_list": [ 00:36:59.717 { 00:36:59.717 "name": "spare", 00:36:59.717 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:59.717 "is_configured": true, 00:36:59.717 "data_offset": 0, 00:36:59.717 "data_size": 65536 00:36:59.717 }, 00:36:59.717 { 00:36:59.717 "name": "BaseBdev2", 00:36:59.717 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:59.717 "is_configured": true, 00:36:59.717 "data_offset": 0, 00:36:59.717 "data_size": 65536 00:36:59.717 }, 00:36:59.717 { 00:36:59.717 "name": "BaseBdev3", 00:36:59.717 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:59.717 "is_configured": true, 00:36:59.717 "data_offset": 0, 00:36:59.717 "data_size": 65536 00:36:59.717 } 
00:36:59.717 ] 00:36:59.717 }' 00:36:59.717 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:59.717 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:59.717 17:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.717 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:59.717 "name": "raid_bdev1", 00:36:59.717 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:59.717 "strip_size_kb": 64, 00:36:59.717 "state": "online", 00:36:59.717 "raid_level": "raid5f", 00:36:59.717 "superblock": false, 
00:36:59.717 "num_base_bdevs": 3, 00:36:59.717 "num_base_bdevs_discovered": 3, 00:36:59.717 "num_base_bdevs_operational": 3, 00:36:59.717 "base_bdevs_list": [ 00:36:59.717 { 00:36:59.717 "name": "spare", 00:36:59.717 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:59.717 "is_configured": true, 00:36:59.717 "data_offset": 0, 00:36:59.717 "data_size": 65536 00:36:59.717 }, 00:36:59.717 { 00:36:59.717 "name": "BaseBdev2", 00:36:59.717 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:59.717 "is_configured": true, 00:36:59.717 "data_offset": 0, 00:36:59.717 "data_size": 65536 00:36:59.717 }, 00:36:59.718 { 00:36:59.718 "name": "BaseBdev3", 00:36:59.718 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 00:36:59.718 "is_configured": true, 00:36:59.718 "data_offset": 0, 00:36:59.718 "data_size": 65536 00:36:59.718 } 00:36:59.718 ] 00:36:59.718 }' 00:36:59.718 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:59.718 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:59.718 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:59.977 
17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:59.977 "name": "raid_bdev1", 00:36:59.977 "uuid": "c98740c8-c4f7-4893-87d7-5727aa9716f3", 00:36:59.977 "strip_size_kb": 64, 00:36:59.977 "state": "online", 00:36:59.977 "raid_level": "raid5f", 00:36:59.977 "superblock": false, 00:36:59.977 "num_base_bdevs": 3, 00:36:59.977 "num_base_bdevs_discovered": 3, 00:36:59.977 "num_base_bdevs_operational": 3, 00:36:59.977 "base_bdevs_list": [ 00:36:59.977 { 00:36:59.977 "name": "spare", 00:36:59.977 "uuid": "0ddf0c36-aafe-5392-a76e-f3ac7260929c", 00:36:59.977 "is_configured": true, 00:36:59.977 "data_offset": 0, 00:36:59.977 "data_size": 65536 00:36:59.977 }, 00:36:59.977 { 00:36:59.977 "name": "BaseBdev2", 00:36:59.977 "uuid": "e1007141-7dbe-5640-be5b-be688a6e64ba", 00:36:59.977 "is_configured": true, 00:36:59.977 "data_offset": 0, 00:36:59.977 "data_size": 65536 00:36:59.977 }, 00:36:59.977 { 00:36:59.977 "name": "BaseBdev3", 00:36:59.977 "uuid": "5b11a2fd-474c-5db3-a596-13fd788f9d54", 
00:36:59.977 "is_configured": true, 00:36:59.977 "data_offset": 0, 00:36:59.977 "data_size": 65536 00:36:59.977 } 00:36:59.977 ] 00:36:59.977 }' 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:59.977 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.236 [2024-11-26 17:33:37.593630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:00.236 [2024-11-26 17:33:37.593785] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:00.236 [2024-11-26 17:33:37.593954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:00.236 [2024-11-26 17:33:37.594077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:00.236 [2024-11-26 17:33:37.594305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:00.236 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:00.496 /dev/nbd0 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:00.496 1+0 records in 00:37:00.496 1+0 records out 00:37:00.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552463 s, 7.4 MB/s 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:00.496 17:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:37:00.780 /dev/nbd1 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:00.780 17:33:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:00.780 1+0 records in 00:37:00.780 1+0 records out 00:37:00.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400037 s, 10.2 MB/s 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:00.780 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:37:01.049 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:37:01.049 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:01.049 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:01.049 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:01.049 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:37:01.049 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:01.049 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:01.308 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:01.309 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:01.309 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:01.309 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:01.309 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:01.309 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:01.309 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:01.309 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:01.309 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:01.309 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82049 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82049 ']' 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82049 00:37:01.567 17:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:37:01.567 17:33:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:01.567 17:33:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82049 00:37:01.826 killing process with pid 82049 00:37:01.826 Received shutdown signal, test time was about 60.000000 seconds 00:37:01.826 00:37:01.826 Latency(us) 00:37:01.826 [2024-11-26T17:33:39.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.826 [2024-11-26T17:33:39.273Z] =================================================================================================================== 00:37:01.826 [2024-11-26T17:33:39.273Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:37:01.826 17:33:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:01.826 17:33:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:01.826 17:33:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82049' 00:37:01.826 17:33:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82049 00:37:01.826 [2024-11-26 17:33:39.037909] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:01.826 17:33:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82049 00:37:02.084 [2024-11-26 17:33:39.438748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:03.462 17:33:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:37:03.462 00:37:03.462 real 0m15.591s 00:37:03.462 user 0m19.036s 00:37:03.462 sys 0m2.402s 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.463 ************************************ 00:37:03.463 END TEST raid5f_rebuild_test 00:37:03.463 ************************************ 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.463 17:33:40 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:37:03.463 17:33:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:37:03.463 17:33:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.463 17:33:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:03.463 ************************************ 00:37:03.463 START TEST raid5f_rebuild_test_sb 00:37:03.463 ************************************ 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:37:03.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82489 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82489 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82489 ']' 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:03.463 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:03.463 [2024-11-26 17:33:40.754717] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:37:03.463 [2024-11-26 17:33:40.754995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82489 ] 00:37:03.463 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:03.463 Zero copy mechanism will not be used. 00:37:03.722 [2024-11-26 17:33:40.949948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.722 [2024-11-26 17:33:41.064561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.980 [2024-11-26 17:33:41.261331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:03.980 [2024-11-26 17:33:41.261573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.239 
17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.239 BaseBdev1_malloc 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.239 [2024-11-26 17:33:41.638314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:04.239 [2024-11-26 17:33:41.638386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:04.239 [2024-11-26 17:33:41.638411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:04.239 [2024-11-26 17:33:41.638426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:04.239 [2024-11-26 17:33:41.640889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:04.239 [2024-11-26 17:33:41.640935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:04.239 BaseBdev1 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.239 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.498 BaseBdev2_malloc 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.498 [2024-11-26 17:33:41.692258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:04.498 [2024-11-26 17:33:41.692322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:04.498 [2024-11-26 17:33:41.692347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:04.498 [2024-11-26 17:33:41.692361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:04.498 [2024-11-26 17:33:41.694730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:04.498 [2024-11-26 17:33:41.694770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:04.498 BaseBdev2 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.498 BaseBdev3_malloc 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.498 [2024-11-26 17:33:41.761992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:37:04.498 [2024-11-26 17:33:41.762064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:04.498 [2024-11-26 17:33:41.762088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:04.498 [2024-11-26 17:33:41.762102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:04.498 [2024-11-26 17:33:41.764471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:04.498 [2024-11-26 17:33:41.764512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:04.498 BaseBdev3 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.498 spare_malloc 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.498 spare_delay 00:37:04.498 
17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.498 [2024-11-26 17:33:41.820386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:04.498 [2024-11-26 17:33:41.820437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:04.498 [2024-11-26 17:33:41.820456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:37:04.498 [2024-11-26 17:33:41.820469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:04.498 [2024-11-26 17:33:41.822850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:04.498 [2024-11-26 17:33:41.822893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:04.498 spare 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.498 [2024-11-26 17:33:41.828468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:04.498 [2024-11-26 17:33:41.830535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:04.498 [2024-11-26 17:33:41.830604] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:04.498 [2024-11-26 17:33:41.830789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:04.498 [2024-11-26 17:33:41.830808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:37:04.498 [2024-11-26 17:33:41.831081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:04.498 [2024-11-26 17:33:41.837393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:04.498 [2024-11-26 17:33:41.837423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:04.498 [2024-11-26 17:33:41.837619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.498 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:04.498 "name": "raid_bdev1", 00:37:04.498 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:04.498 "strip_size_kb": 64, 00:37:04.498 "state": "online", 00:37:04.498 "raid_level": "raid5f", 00:37:04.498 "superblock": true, 00:37:04.498 "num_base_bdevs": 3, 00:37:04.498 "num_base_bdevs_discovered": 3, 00:37:04.498 "num_base_bdevs_operational": 3, 00:37:04.498 "base_bdevs_list": [ 00:37:04.498 { 00:37:04.498 "name": "BaseBdev1", 00:37:04.498 "uuid": "018e53f8-c592-50b2-a3e4-c4b0bbd8ca32", 00:37:04.498 "is_configured": true, 00:37:04.498 "data_offset": 2048, 00:37:04.498 "data_size": 63488 00:37:04.498 }, 00:37:04.498 { 00:37:04.498 "name": "BaseBdev2", 00:37:04.498 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:04.498 "is_configured": true, 00:37:04.499 "data_offset": 2048, 00:37:04.499 "data_size": 63488 00:37:04.499 }, 00:37:04.499 { 00:37:04.499 "name": "BaseBdev3", 00:37:04.499 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:04.499 "is_configured": true, 00:37:04.499 "data_offset": 2048, 00:37:04.499 "data_size": 63488 00:37:04.499 } 00:37:04.499 ] 00:37:04.499 }' 00:37:04.499 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:04.499 17:33:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:37:05.066 [2024-11-26 17:33:42.300659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:37:05.066 17:33:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:05.066 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:05.324 [2024-11-26 17:33:42.548806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:37:05.324 /dev/nbd0 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:05.325 1+0 records in 00:37:05.325 1+0 records out 00:37:05.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306184 s, 13.4 MB/s 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:37:05.325 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:37:05.891 496+0 records in 00:37:05.891 496+0 records out 00:37:05.891 65011712 bytes (65 MB, 62 MiB) copied, 0.428969 s, 152 MB/s 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:05.891 [2024-11-26 17:33:43.284903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:05.891 [2024-11-26 17:33:43.300275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.891 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.892 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.149 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:06.149 "name": "raid_bdev1", 00:37:06.149 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:06.149 "strip_size_kb": 64, 00:37:06.149 "state": "online", 00:37:06.149 "raid_level": "raid5f", 00:37:06.149 "superblock": true, 00:37:06.149 "num_base_bdevs": 3, 00:37:06.149 "num_base_bdevs_discovered": 2, 00:37:06.149 "num_base_bdevs_operational": 2, 00:37:06.149 "base_bdevs_list": [ 00:37:06.149 { 00:37:06.149 "name": null, 00:37:06.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.149 "is_configured": false, 00:37:06.149 "data_offset": 0, 00:37:06.149 "data_size": 63488 00:37:06.149 }, 00:37:06.149 { 00:37:06.149 "name": "BaseBdev2", 00:37:06.149 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:06.149 "is_configured": true, 00:37:06.149 "data_offset": 2048, 00:37:06.149 "data_size": 63488 00:37:06.149 }, 00:37:06.149 { 00:37:06.149 "name": "BaseBdev3", 00:37:06.149 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:06.149 "is_configured": true, 00:37:06.149 "data_offset": 2048, 00:37:06.149 "data_size": 63488 00:37:06.149 } 00:37:06.149 ] 00:37:06.149 }' 00:37:06.149 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:06.149 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.408 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:06.408 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.408 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.408 [2024-11-26 17:33:43.716451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:06.408 [2024-11-26 17:33:43.733214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:37:06.408 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.408 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:06.408 [2024-11-26 17:33:43.741089] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:07.345 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:07.604 "name": "raid_bdev1", 00:37:07.604 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:07.604 "strip_size_kb": 64, 00:37:07.604 "state": "online", 00:37:07.604 "raid_level": "raid5f", 00:37:07.604 "superblock": true, 00:37:07.604 "num_base_bdevs": 3, 00:37:07.604 "num_base_bdevs_discovered": 3, 00:37:07.604 "num_base_bdevs_operational": 3, 00:37:07.604 "process": { 00:37:07.604 "type": "rebuild", 00:37:07.604 "target": "spare", 00:37:07.604 "progress": { 
00:37:07.604 "blocks": 18432, 00:37:07.604 "percent": 14 00:37:07.604 } 00:37:07.604 }, 00:37:07.604 "base_bdevs_list": [ 00:37:07.604 { 00:37:07.604 "name": "spare", 00:37:07.604 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:07.604 "is_configured": true, 00:37:07.604 "data_offset": 2048, 00:37:07.604 "data_size": 63488 00:37:07.604 }, 00:37:07.604 { 00:37:07.604 "name": "BaseBdev2", 00:37:07.604 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:07.604 "is_configured": true, 00:37:07.604 "data_offset": 2048, 00:37:07.604 "data_size": 63488 00:37:07.604 }, 00:37:07.604 { 00:37:07.604 "name": "BaseBdev3", 00:37:07.604 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:07.604 "is_configured": true, 00:37:07.604 "data_offset": 2048, 00:37:07.604 "data_size": 63488 00:37:07.604 } 00:37:07.604 ] 00:37:07.604 }' 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.604 [2024-11-26 17:33:44.882561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:07.604 [2024-11-26 17:33:44.953388] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:07.604 [2024-11-26 17:33:44.953460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:37:07.604 [2024-11-26 17:33:44.953481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:07.604 [2024-11-26 17:33:44.953490] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:07.604 17:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:07.604 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.604 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.604 17:33:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.604 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:07.604 "name": "raid_bdev1", 00:37:07.604 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:07.604 "strip_size_kb": 64, 00:37:07.604 "state": "online", 00:37:07.604 "raid_level": "raid5f", 00:37:07.604 "superblock": true, 00:37:07.604 "num_base_bdevs": 3, 00:37:07.604 "num_base_bdevs_discovered": 2, 00:37:07.604 "num_base_bdevs_operational": 2, 00:37:07.604 "base_bdevs_list": [ 00:37:07.604 { 00:37:07.604 "name": null, 00:37:07.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:07.604 "is_configured": false, 00:37:07.604 "data_offset": 0, 00:37:07.604 "data_size": 63488 00:37:07.604 }, 00:37:07.604 { 00:37:07.604 "name": "BaseBdev2", 00:37:07.604 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:07.604 "is_configured": true, 00:37:07.604 "data_offset": 2048, 00:37:07.604 "data_size": 63488 00:37:07.604 }, 00:37:07.604 { 00:37:07.604 "name": "BaseBdev3", 00:37:07.604 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:07.604 "is_configured": true, 00:37:07.604 "data_offset": 2048, 00:37:07.604 "data_size": 63488 00:37:07.604 } 00:37:07.604 ] 00:37:07.604 }' 00:37:07.604 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:07.604 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:08.172 "name": "raid_bdev1", 00:37:08.172 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:08.172 "strip_size_kb": 64, 00:37:08.172 "state": "online", 00:37:08.172 "raid_level": "raid5f", 00:37:08.172 "superblock": true, 00:37:08.172 "num_base_bdevs": 3, 00:37:08.172 "num_base_bdevs_discovered": 2, 00:37:08.172 "num_base_bdevs_operational": 2, 00:37:08.172 "base_bdevs_list": [ 00:37:08.172 { 00:37:08.172 "name": null, 00:37:08.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.172 "is_configured": false, 00:37:08.172 "data_offset": 0, 00:37:08.172 "data_size": 63488 00:37:08.172 }, 00:37:08.172 { 00:37:08.172 "name": "BaseBdev2", 00:37:08.172 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:08.172 "is_configured": true, 00:37:08.172 "data_offset": 2048, 00:37:08.172 "data_size": 63488 00:37:08.172 }, 00:37:08.172 { 00:37:08.172 "name": "BaseBdev3", 00:37:08.172 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:08.172 "is_configured": true, 00:37:08.172 "data_offset": 2048, 00:37:08.172 "data_size": 63488 00:37:08.172 } 00:37:08.172 ] 00:37:08.172 }' 00:37:08.172 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:08.173 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:08.173 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:08.173 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:08.173 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:08.173 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.173 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.173 [2024-11-26 17:33:45.596989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:08.173 [2024-11-26 17:33:45.613248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:37:08.173 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.173 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:08.431 [2024-11-26 17:33:45.621281] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.368 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:09.368 "name": "raid_bdev1", 00:37:09.368 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:09.368 "strip_size_kb": 64, 00:37:09.368 "state": "online", 00:37:09.368 "raid_level": "raid5f", 00:37:09.368 "superblock": true, 00:37:09.368 "num_base_bdevs": 3, 00:37:09.368 "num_base_bdevs_discovered": 3, 00:37:09.368 "num_base_bdevs_operational": 3, 00:37:09.368 "process": { 00:37:09.368 "type": "rebuild", 00:37:09.368 "target": "spare", 00:37:09.368 "progress": { 00:37:09.368 "blocks": 18432, 00:37:09.368 "percent": 14 00:37:09.368 } 00:37:09.368 }, 00:37:09.368 "base_bdevs_list": [ 00:37:09.368 { 00:37:09.368 "name": "spare", 00:37:09.368 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:09.368 "is_configured": true, 00:37:09.368 "data_offset": 2048, 00:37:09.368 "data_size": 63488 00:37:09.368 }, 00:37:09.368 { 00:37:09.368 "name": "BaseBdev2", 00:37:09.369 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:09.369 "is_configured": true, 00:37:09.369 "data_offset": 2048, 00:37:09.369 "data_size": 63488 00:37:09.369 }, 00:37:09.369 { 00:37:09.369 "name": "BaseBdev3", 00:37:09.369 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:09.369 "is_configured": true, 00:37:09.369 "data_offset": 2048, 00:37:09.369 "data_size": 63488 00:37:09.369 } 00:37:09.369 ] 00:37:09.369 }' 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:37:09.369 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=580 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:09.369 "name": "raid_bdev1", 00:37:09.369 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:09.369 "strip_size_kb": 64, 00:37:09.369 "state": "online", 00:37:09.369 "raid_level": "raid5f", 00:37:09.369 "superblock": true, 00:37:09.369 "num_base_bdevs": 3, 00:37:09.369 "num_base_bdevs_discovered": 3, 00:37:09.369 "num_base_bdevs_operational": 3, 00:37:09.369 "process": { 00:37:09.369 "type": "rebuild", 00:37:09.369 "target": "spare", 00:37:09.369 "progress": { 00:37:09.369 "blocks": 22528, 00:37:09.369 "percent": 17 00:37:09.369 } 00:37:09.369 }, 00:37:09.369 "base_bdevs_list": [ 00:37:09.369 { 00:37:09.369 "name": "spare", 00:37:09.369 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:09.369 "is_configured": true, 00:37:09.369 "data_offset": 2048, 00:37:09.369 "data_size": 63488 00:37:09.369 }, 00:37:09.369 { 00:37:09.369 "name": "BaseBdev2", 00:37:09.369 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:09.369 "is_configured": true, 00:37:09.369 "data_offset": 2048, 00:37:09.369 "data_size": 63488 00:37:09.369 }, 00:37:09.369 { 00:37:09.369 "name": "BaseBdev3", 00:37:09.369 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:09.369 "is_configured": true, 00:37:09.369 "data_offset": 2048, 00:37:09.369 "data_size": 63488 00:37:09.369 } 00:37:09.369 ] 00:37:09.369 }' 00:37:09.369 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:09.628 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:09.628 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:09.628 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:37:09.628 17:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:10.564 "name": "raid_bdev1", 00:37:10.564 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:10.564 "strip_size_kb": 64, 00:37:10.564 "state": "online", 00:37:10.564 "raid_level": "raid5f", 00:37:10.564 "superblock": true, 00:37:10.564 "num_base_bdevs": 3, 00:37:10.564 "num_base_bdevs_discovered": 3, 00:37:10.564 "num_base_bdevs_operational": 3, 00:37:10.564 "process": { 00:37:10.564 "type": "rebuild", 00:37:10.564 "target": "spare", 00:37:10.564 "progress": { 00:37:10.564 "blocks": 45056, 00:37:10.564 "percent": 35 00:37:10.564 } 00:37:10.564 }, 
00:37:10.564 "base_bdevs_list": [ 00:37:10.564 { 00:37:10.564 "name": "spare", 00:37:10.564 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:10.564 "is_configured": true, 00:37:10.564 "data_offset": 2048, 00:37:10.564 "data_size": 63488 00:37:10.564 }, 00:37:10.564 { 00:37:10.564 "name": "BaseBdev2", 00:37:10.564 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:10.564 "is_configured": true, 00:37:10.564 "data_offset": 2048, 00:37:10.564 "data_size": 63488 00:37:10.564 }, 00:37:10.564 { 00:37:10.564 "name": "BaseBdev3", 00:37:10.564 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:10.564 "is_configured": true, 00:37:10.564 "data_offset": 2048, 00:37:10.564 "data_size": 63488 00:37:10.564 } 00:37:10.564 ] 00:37:10.564 }' 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:10.564 17:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:10.564 17:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:10.823 17:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:10.823 17:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:11.759 
17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.759 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:11.759 "name": "raid_bdev1", 00:37:11.759 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:11.759 "strip_size_kb": 64, 00:37:11.759 "state": "online", 00:37:11.759 "raid_level": "raid5f", 00:37:11.759 "superblock": true, 00:37:11.759 "num_base_bdevs": 3, 00:37:11.759 "num_base_bdevs_discovered": 3, 00:37:11.759 "num_base_bdevs_operational": 3, 00:37:11.759 "process": { 00:37:11.759 "type": "rebuild", 00:37:11.759 "target": "spare", 00:37:11.759 "progress": { 00:37:11.759 "blocks": 67584, 00:37:11.759 "percent": 53 00:37:11.759 } 00:37:11.759 }, 00:37:11.759 "base_bdevs_list": [ 00:37:11.759 { 00:37:11.759 "name": "spare", 00:37:11.759 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:11.759 "is_configured": true, 00:37:11.759 "data_offset": 2048, 00:37:11.759 "data_size": 63488 00:37:11.759 }, 00:37:11.759 { 00:37:11.759 "name": "BaseBdev2", 00:37:11.759 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:11.759 "is_configured": true, 00:37:11.759 "data_offset": 2048, 00:37:11.759 "data_size": 63488 00:37:11.759 }, 00:37:11.759 { 00:37:11.759 "name": "BaseBdev3", 00:37:11.759 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:11.759 "is_configured": true, 00:37:11.759 "data_offset": 2048, 00:37:11.759 "data_size": 63488 00:37:11.759 } 00:37:11.759 ] 00:37:11.759 }' 00:37:11.759 17:33:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:11.760 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:11.760 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:11.760 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:11.760 17:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:13.138 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:13.139 "name": "raid_bdev1", 00:37:13.139 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:13.139 
"strip_size_kb": 64, 00:37:13.139 "state": "online", 00:37:13.139 "raid_level": "raid5f", 00:37:13.139 "superblock": true, 00:37:13.139 "num_base_bdevs": 3, 00:37:13.139 "num_base_bdevs_discovered": 3, 00:37:13.139 "num_base_bdevs_operational": 3, 00:37:13.139 "process": { 00:37:13.139 "type": "rebuild", 00:37:13.139 "target": "spare", 00:37:13.139 "progress": { 00:37:13.139 "blocks": 92160, 00:37:13.139 "percent": 72 00:37:13.139 } 00:37:13.139 }, 00:37:13.139 "base_bdevs_list": [ 00:37:13.139 { 00:37:13.139 "name": "spare", 00:37:13.139 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:13.139 "is_configured": true, 00:37:13.139 "data_offset": 2048, 00:37:13.139 "data_size": 63488 00:37:13.139 }, 00:37:13.139 { 00:37:13.139 "name": "BaseBdev2", 00:37:13.139 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:13.139 "is_configured": true, 00:37:13.139 "data_offset": 2048, 00:37:13.139 "data_size": 63488 00:37:13.139 }, 00:37:13.139 { 00:37:13.139 "name": "BaseBdev3", 00:37:13.139 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:13.139 "is_configured": true, 00:37:13.139 "data_offset": 2048, 00:37:13.139 "data_size": 63488 00:37:13.139 } 00:37:13.139 ] 00:37:13.139 }' 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:13.139 17:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:14.133 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:14.133 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:37:14.133 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:14.133 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:14.133 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:14.134 "name": "raid_bdev1", 00:37:14.134 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:14.134 "strip_size_kb": 64, 00:37:14.134 "state": "online", 00:37:14.134 "raid_level": "raid5f", 00:37:14.134 "superblock": true, 00:37:14.134 "num_base_bdevs": 3, 00:37:14.134 "num_base_bdevs_discovered": 3, 00:37:14.134 "num_base_bdevs_operational": 3, 00:37:14.134 "process": { 00:37:14.134 "type": "rebuild", 00:37:14.134 "target": "spare", 00:37:14.134 "progress": { 00:37:14.134 "blocks": 114688, 00:37:14.134 "percent": 90 00:37:14.134 } 00:37:14.134 }, 00:37:14.134 "base_bdevs_list": [ 00:37:14.134 { 00:37:14.134 "name": "spare", 00:37:14.134 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:14.134 "is_configured": true, 00:37:14.134 "data_offset": 2048, 00:37:14.134 "data_size": 63488 00:37:14.134 }, 00:37:14.134 { 00:37:14.134 "name": "BaseBdev2", 00:37:14.134 "uuid": 
"a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:14.134 "is_configured": true, 00:37:14.134 "data_offset": 2048, 00:37:14.134 "data_size": 63488 00:37:14.134 }, 00:37:14.134 { 00:37:14.134 "name": "BaseBdev3", 00:37:14.134 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:14.134 "is_configured": true, 00:37:14.134 "data_offset": 2048, 00:37:14.134 "data_size": 63488 00:37:14.134 } 00:37:14.134 ] 00:37:14.134 }' 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:14.134 17:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:14.715 [2024-11-26 17:33:51.881664] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:14.715 [2024-11-26 17:33:51.881763] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:14.715 [2024-11-26 17:33:51.881867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:15.285 "name": "raid_bdev1", 00:37:15.285 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:15.285 "strip_size_kb": 64, 00:37:15.285 "state": "online", 00:37:15.285 "raid_level": "raid5f", 00:37:15.285 "superblock": true, 00:37:15.285 "num_base_bdevs": 3, 00:37:15.285 "num_base_bdevs_discovered": 3, 00:37:15.285 "num_base_bdevs_operational": 3, 00:37:15.285 "base_bdevs_list": [ 00:37:15.285 { 00:37:15.285 "name": "spare", 00:37:15.285 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:15.285 "is_configured": true, 00:37:15.285 "data_offset": 2048, 00:37:15.285 "data_size": 63488 00:37:15.285 }, 00:37:15.285 { 00:37:15.285 "name": "BaseBdev2", 00:37:15.285 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:15.285 "is_configured": true, 00:37:15.285 "data_offset": 2048, 00:37:15.285 "data_size": 63488 00:37:15.285 }, 00:37:15.285 { 00:37:15.285 "name": "BaseBdev3", 00:37:15.285 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:15.285 "is_configured": true, 00:37:15.285 "data_offset": 2048, 00:37:15.285 "data_size": 63488 00:37:15.285 } 00:37:15.285 ] 00:37:15.285 }' 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:15.285 "name": "raid_bdev1", 00:37:15.285 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:15.285 "strip_size_kb": 64, 00:37:15.285 "state": "online", 00:37:15.285 "raid_level": "raid5f", 00:37:15.285 "superblock": true, 00:37:15.285 "num_base_bdevs": 3, 00:37:15.285 "num_base_bdevs_discovered": 3, 00:37:15.285 "num_base_bdevs_operational": 3, 00:37:15.285 "base_bdevs_list": [ 
00:37:15.285 { 00:37:15.285 "name": "spare", 00:37:15.285 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:15.285 "is_configured": true, 00:37:15.285 "data_offset": 2048, 00:37:15.285 "data_size": 63488 00:37:15.285 }, 00:37:15.285 { 00:37:15.285 "name": "BaseBdev2", 00:37:15.285 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:15.285 "is_configured": true, 00:37:15.285 "data_offset": 2048, 00:37:15.285 "data_size": 63488 00:37:15.285 }, 00:37:15.285 { 00:37:15.285 "name": "BaseBdev3", 00:37:15.285 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:15.285 "is_configured": true, 00:37:15.285 "data_offset": 2048, 00:37:15.285 "data_size": 63488 00:37:15.285 } 00:37:15.285 ] 00:37:15.285 }' 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:15.285 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:15.546 17:33:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:15.546 "name": "raid_bdev1", 00:37:15.546 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:15.546 "strip_size_kb": 64, 00:37:15.546 "state": "online", 00:37:15.546 "raid_level": "raid5f", 00:37:15.546 "superblock": true, 00:37:15.546 "num_base_bdevs": 3, 00:37:15.546 "num_base_bdevs_discovered": 3, 00:37:15.546 "num_base_bdevs_operational": 3, 00:37:15.546 "base_bdevs_list": [ 00:37:15.546 { 00:37:15.546 "name": "spare", 00:37:15.546 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:15.546 "is_configured": true, 00:37:15.546 "data_offset": 2048, 00:37:15.546 "data_size": 63488 00:37:15.546 }, 00:37:15.546 { 00:37:15.546 "name": "BaseBdev2", 00:37:15.546 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:15.546 "is_configured": true, 00:37:15.546 "data_offset": 2048, 00:37:15.546 "data_size": 63488 00:37:15.546 }, 00:37:15.546 { 00:37:15.546 "name": "BaseBdev3", 00:37:15.546 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:15.546 "is_configured": true, 00:37:15.546 "data_offset": 2048, 00:37:15.546 
"data_size": 63488 00:37:15.546 } 00:37:15.546 ] 00:37:15.546 }' 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:15.546 17:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:15.806 [2024-11-26 17:33:53.165183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:15.806 [2024-11-26 17:33:53.165216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:15.806 [2024-11-26 17:33:53.165307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:15.806 [2024-11-26 17:33:53.165386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:15.806 [2024-11-26 17:33:53.165405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:15.806 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:16.374 /dev/nbd0 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:16.374 17:33:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:16.374 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:16.374 1+0 records in 00:37:16.375 1+0 records out 00:37:16.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263156 s, 15.6 MB/s 00:37:16.375 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:16.375 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:37:16.375 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:16.375 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:16.375 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:37:16.375 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:16.375 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:16.375 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:37:16.634 /dev/nbd1 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:16.634 17:33:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:16.634 1+0 records in 00:37:16.634 1+0 records out 00:37:16.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290919 s, 14.1 MB/s 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:16.634 17:33:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:16.634 17:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:16.893 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:37:16.893 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:16.893 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:16.893 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:16.893 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:37:16.893 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:16.893 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:17.152 
17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.152 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.152 [2024-11-26 17:33:54.594510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:17.152 
[2024-11-26 17:33:54.594580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:17.152 [2024-11-26 17:33:54.594606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:17.152 [2024-11-26 17:33:54.594622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:17.152 [2024-11-26 17:33:54.597463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:17.152 [2024-11-26 17:33:54.597509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:17.153 [2024-11-26 17:33:54.597585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:17.153 [2024-11-26 17:33:54.597640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:17.153 [2024-11-26 17:33:54.597810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:17.153 [2024-11-26 17:33:54.597923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:17.412 spare 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.412 [2024-11-26 17:33:54.698026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:37:17.412 [2024-11-26 17:33:54.698085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:37:17.412 [2024-11-26 17:33:54.698433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:37:17.412 [2024-11-26 17:33:54.704643] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:37:17.412 [2024-11-26 17:33:54.704683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:37:17.412 [2024-11-26 17:33:54.704912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:17.412 "name": "raid_bdev1", 00:37:17.412 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:17.412 "strip_size_kb": 64, 00:37:17.412 "state": "online", 00:37:17.412 "raid_level": "raid5f", 00:37:17.412 "superblock": true, 00:37:17.412 "num_base_bdevs": 3, 00:37:17.412 "num_base_bdevs_discovered": 3, 00:37:17.412 "num_base_bdevs_operational": 3, 00:37:17.412 "base_bdevs_list": [ 00:37:17.412 { 00:37:17.412 "name": "spare", 00:37:17.412 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:17.412 "is_configured": true, 00:37:17.412 "data_offset": 2048, 00:37:17.412 "data_size": 63488 00:37:17.412 }, 00:37:17.412 { 00:37:17.412 "name": "BaseBdev2", 00:37:17.412 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:17.412 "is_configured": true, 00:37:17.412 "data_offset": 2048, 00:37:17.412 "data_size": 63488 00:37:17.412 }, 00:37:17.412 { 00:37:17.412 "name": "BaseBdev3", 00:37:17.412 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:17.412 "is_configured": true, 00:37:17.412 "data_offset": 2048, 00:37:17.412 "data_size": 63488 00:37:17.412 } 00:37:17.412 ] 00:37:17.412 }' 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:17.412 17:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:17.981 "name": "raid_bdev1", 00:37:17.981 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:17.981 "strip_size_kb": 64, 00:37:17.981 "state": "online", 00:37:17.981 "raid_level": "raid5f", 00:37:17.981 "superblock": true, 00:37:17.981 "num_base_bdevs": 3, 00:37:17.981 "num_base_bdevs_discovered": 3, 00:37:17.981 "num_base_bdevs_operational": 3, 00:37:17.981 "base_bdevs_list": [ 00:37:17.981 { 00:37:17.981 "name": "spare", 00:37:17.981 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:17.981 "is_configured": true, 00:37:17.981 "data_offset": 2048, 00:37:17.981 "data_size": 63488 00:37:17.981 }, 00:37:17.981 { 00:37:17.981 "name": "BaseBdev2", 00:37:17.981 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:17.981 "is_configured": true, 00:37:17.981 "data_offset": 2048, 00:37:17.981 "data_size": 63488 00:37:17.981 }, 00:37:17.981 { 00:37:17.981 "name": "BaseBdev3", 00:37:17.981 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:17.981 "is_configured": true, 00:37:17.981 "data_offset": 2048, 00:37:17.981 "data_size": 63488 00:37:17.981 } 00:37:17.981 ] 00:37:17.981 }' 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.981 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.982 [2024-11-26 17:33:55.311698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:17.982 "name": "raid_bdev1", 00:37:17.982 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:17.982 "strip_size_kb": 64, 00:37:17.982 "state": "online", 00:37:17.982 "raid_level": "raid5f", 00:37:17.982 "superblock": true, 00:37:17.982 "num_base_bdevs": 3, 00:37:17.982 "num_base_bdevs_discovered": 2, 00:37:17.982 "num_base_bdevs_operational": 2, 00:37:17.982 "base_bdevs_list": [ 00:37:17.982 { 00:37:17.982 "name": null, 00:37:17.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:17.982 "is_configured": false, 00:37:17.982 "data_offset": 0, 00:37:17.982 "data_size": 63488 00:37:17.982 }, 00:37:17.982 { 00:37:17.982 "name": "BaseBdev2", 
00:37:17.982 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:17.982 "is_configured": true, 00:37:17.982 "data_offset": 2048, 00:37:17.982 "data_size": 63488 00:37:17.982 }, 00:37:17.982 { 00:37:17.982 "name": "BaseBdev3", 00:37:17.982 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:17.982 "is_configured": true, 00:37:17.982 "data_offset": 2048, 00:37:17.982 "data_size": 63488 00:37:17.982 } 00:37:17.982 ] 00:37:17.982 }' 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:17.982 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:18.550 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:18.550 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.550 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:18.550 [2024-11-26 17:33:55.731864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:18.550 [2024-11-26 17:33:55.732089] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:18.550 [2024-11-26 17:33:55.732115] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:18.550 [2024-11-26 17:33:55.732161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:18.550 [2024-11-26 17:33:55.749754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:37:18.550 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.550 17:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:37:18.550 [2024-11-26 17:33:55.758713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.486 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:19.486 "name": "raid_bdev1", 00:37:19.486 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:19.486 "strip_size_kb": 64, 00:37:19.486 "state": "online", 00:37:19.486 
"raid_level": "raid5f", 00:37:19.486 "superblock": true, 00:37:19.486 "num_base_bdevs": 3, 00:37:19.486 "num_base_bdevs_discovered": 3, 00:37:19.486 "num_base_bdevs_operational": 3, 00:37:19.486 "process": { 00:37:19.486 "type": "rebuild", 00:37:19.486 "target": "spare", 00:37:19.486 "progress": { 00:37:19.486 "blocks": 18432, 00:37:19.487 "percent": 14 00:37:19.487 } 00:37:19.487 }, 00:37:19.487 "base_bdevs_list": [ 00:37:19.487 { 00:37:19.487 "name": "spare", 00:37:19.487 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:19.487 "is_configured": true, 00:37:19.487 "data_offset": 2048, 00:37:19.487 "data_size": 63488 00:37:19.487 }, 00:37:19.487 { 00:37:19.487 "name": "BaseBdev2", 00:37:19.487 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:19.487 "is_configured": true, 00:37:19.487 "data_offset": 2048, 00:37:19.487 "data_size": 63488 00:37:19.487 }, 00:37:19.487 { 00:37:19.487 "name": "BaseBdev3", 00:37:19.487 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:19.487 "is_configured": true, 00:37:19.487 "data_offset": 2048, 00:37:19.487 "data_size": 63488 00:37:19.487 } 00:37:19.487 ] 00:37:19.487 }' 00:37:19.487 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:19.487 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:19.487 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:19.487 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:19.487 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:37:19.487 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.487 17:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:19.487 [2024-11-26 17:33:56.900876] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:19.746 [2024-11-26 17:33:56.970702] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:19.746 [2024-11-26 17:33:56.970784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:19.746 [2024-11-26 17:33:56.970803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:19.746 [2024-11-26 17:33:56.970815] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:19.746 "name": "raid_bdev1", 00:37:19.746 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:19.746 "strip_size_kb": 64, 00:37:19.746 "state": "online", 00:37:19.746 "raid_level": "raid5f", 00:37:19.746 "superblock": true, 00:37:19.746 "num_base_bdevs": 3, 00:37:19.746 "num_base_bdevs_discovered": 2, 00:37:19.746 "num_base_bdevs_operational": 2, 00:37:19.746 "base_bdevs_list": [ 00:37:19.746 { 00:37:19.746 "name": null, 00:37:19.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:19.746 "is_configured": false, 00:37:19.746 "data_offset": 0, 00:37:19.746 "data_size": 63488 00:37:19.746 }, 00:37:19.746 { 00:37:19.746 "name": "BaseBdev2", 00:37:19.746 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:19.746 "is_configured": true, 00:37:19.746 "data_offset": 2048, 00:37:19.746 "data_size": 63488 00:37:19.746 }, 00:37:19.746 { 00:37:19.746 "name": "BaseBdev3", 00:37:19.746 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:19.746 "is_configured": true, 00:37:19.746 "data_offset": 2048, 00:37:19.746 "data_size": 63488 00:37:19.746 } 00:37:19.746 ] 00:37:19.746 }' 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:19.746 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:20.005 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:20.005 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.005 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:20.264 [2024-11-26 17:33:57.453968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:20.264 [2024-11-26 17:33:57.454042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:20.264 [2024-11-26 17:33:57.454077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:37:20.264 [2024-11-26 17:33:57.454095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:20.264 [2024-11-26 17:33:57.454647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:20.264 [2024-11-26 17:33:57.454681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:20.264 [2024-11-26 17:33:57.454789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:20.264 [2024-11-26 17:33:57.454810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:20.264 [2024-11-26 17:33:57.454823] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:20.264 [2024-11-26 17:33:57.454853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:20.264 [2024-11-26 17:33:57.471142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:37:20.264 spare 00:37:20.264 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.264 17:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:37:20.264 [2024-11-26 17:33:57.478988] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:21.201 "name": "raid_bdev1", 00:37:21.201 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:21.201 "strip_size_kb": 64, 00:37:21.201 "state": 
"online", 00:37:21.201 "raid_level": "raid5f", 00:37:21.201 "superblock": true, 00:37:21.201 "num_base_bdevs": 3, 00:37:21.201 "num_base_bdevs_discovered": 3, 00:37:21.201 "num_base_bdevs_operational": 3, 00:37:21.201 "process": { 00:37:21.201 "type": "rebuild", 00:37:21.201 "target": "spare", 00:37:21.201 "progress": { 00:37:21.201 "blocks": 18432, 00:37:21.201 "percent": 14 00:37:21.201 } 00:37:21.201 }, 00:37:21.201 "base_bdevs_list": [ 00:37:21.201 { 00:37:21.201 "name": "spare", 00:37:21.201 "uuid": "8b7ea0ab-5401-5670-a29a-4f8c478439f2", 00:37:21.201 "is_configured": true, 00:37:21.201 "data_offset": 2048, 00:37:21.201 "data_size": 63488 00:37:21.201 }, 00:37:21.201 { 00:37:21.201 "name": "BaseBdev2", 00:37:21.201 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:21.201 "is_configured": true, 00:37:21.201 "data_offset": 2048, 00:37:21.201 "data_size": 63488 00:37:21.201 }, 00:37:21.201 { 00:37:21.201 "name": "BaseBdev3", 00:37:21.201 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:21.201 "is_configured": true, 00:37:21.201 "data_offset": 2048, 00:37:21.201 "data_size": 63488 00:37:21.201 } 00:37:21.201 ] 00:37:21.201 }' 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.201 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:21.201 [2024-11-26 17:33:58.608857] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:21.462 [2024-11-26 17:33:58.690881] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:21.462 [2024-11-26 17:33:58.690948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:21.462 [2024-11-26 17:33:58.690968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:21.462 [2024-11-26 17:33:58.690977] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:21.462 "name": "raid_bdev1", 00:37:21.462 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:21.462 "strip_size_kb": 64, 00:37:21.462 "state": "online", 00:37:21.462 "raid_level": "raid5f", 00:37:21.462 "superblock": true, 00:37:21.462 "num_base_bdevs": 3, 00:37:21.462 "num_base_bdevs_discovered": 2, 00:37:21.462 "num_base_bdevs_operational": 2, 00:37:21.462 "base_bdevs_list": [ 00:37:21.462 { 00:37:21.462 "name": null, 00:37:21.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.462 "is_configured": false, 00:37:21.462 "data_offset": 0, 00:37:21.462 "data_size": 63488 00:37:21.462 }, 00:37:21.462 { 00:37:21.462 "name": "BaseBdev2", 00:37:21.462 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:21.462 "is_configured": true, 00:37:21.462 "data_offset": 2048, 00:37:21.462 "data_size": 63488 00:37:21.462 }, 00:37:21.462 { 00:37:21.462 "name": "BaseBdev3", 00:37:21.462 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:21.462 "is_configured": true, 00:37:21.462 "data_offset": 2048, 00:37:21.462 "data_size": 63488 00:37:21.462 } 00:37:21.462 ] 00:37:21.462 }' 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:21.462 17:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:22.028 "name": "raid_bdev1", 00:37:22.028 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:22.028 "strip_size_kb": 64, 00:37:22.028 "state": "online", 00:37:22.028 "raid_level": "raid5f", 00:37:22.028 "superblock": true, 00:37:22.028 "num_base_bdevs": 3, 00:37:22.028 "num_base_bdevs_discovered": 2, 00:37:22.028 "num_base_bdevs_operational": 2, 00:37:22.028 "base_bdevs_list": [ 00:37:22.028 { 00:37:22.028 "name": null, 00:37:22.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:22.028 "is_configured": false, 00:37:22.028 "data_offset": 0, 00:37:22.028 "data_size": 63488 00:37:22.028 }, 00:37:22.028 { 00:37:22.028 "name": "BaseBdev2", 00:37:22.028 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:22.028 "is_configured": true, 00:37:22.028 "data_offset": 2048, 00:37:22.028 "data_size": 63488 00:37:22.028 }, 00:37:22.028 { 00:37:22.028 "name": "BaseBdev3", 00:37:22.028 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:22.028 "is_configured": true, 
00:37:22.028 "data_offset": 2048, 00:37:22.028 "data_size": 63488 00:37:22.028 } 00:37:22.028 ] 00:37:22.028 }' 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:22.028 [2024-11-26 17:33:59.322626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:22.028 [2024-11-26 17:33:59.322689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:22.028 [2024-11-26 17:33:59.322719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:37:22.028 [2024-11-26 17:33:59.322732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:22.028 [2024-11-26 17:33:59.323267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:22.028 [2024-11-26 
17:33:59.323298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:22.028 [2024-11-26 17:33:59.323388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:22.028 [2024-11-26 17:33:59.323410] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:22.028 [2024-11-26 17:33:59.323434] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:22.028 [2024-11-26 17:33:59.323447] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:37:22.028 BaseBdev1 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.028 17:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:37:22.964 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:37:22.964 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:22.964 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:22.964 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:22.964 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:22.964 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:22.964 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:22.964 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:22.964 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:22.964 17:34:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:22.965 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:22.965 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.965 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.965 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:22.965 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.965 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:22.965 "name": "raid_bdev1", 00:37:22.965 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:22.965 "strip_size_kb": 64, 00:37:22.965 "state": "online", 00:37:22.965 "raid_level": "raid5f", 00:37:22.965 "superblock": true, 00:37:22.965 "num_base_bdevs": 3, 00:37:22.965 "num_base_bdevs_discovered": 2, 00:37:22.965 "num_base_bdevs_operational": 2, 00:37:22.965 "base_bdevs_list": [ 00:37:22.965 { 00:37:22.965 "name": null, 00:37:22.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:22.965 "is_configured": false, 00:37:22.965 "data_offset": 0, 00:37:22.965 "data_size": 63488 00:37:22.965 }, 00:37:22.965 { 00:37:22.965 "name": "BaseBdev2", 00:37:22.965 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:22.965 "is_configured": true, 00:37:22.965 "data_offset": 2048, 00:37:22.965 "data_size": 63488 00:37:22.965 }, 00:37:22.965 { 00:37:22.965 "name": "BaseBdev3", 00:37:22.965 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:22.965 "is_configured": true, 00:37:22.965 "data_offset": 2048, 00:37:22.965 "data_size": 63488 00:37:22.965 } 00:37:22.965 ] 00:37:22.965 }' 00:37:22.965 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:22.965 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.533 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:23.533 "name": "raid_bdev1", 00:37:23.533 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:23.533 "strip_size_kb": 64, 00:37:23.533 "state": "online", 00:37:23.533 "raid_level": "raid5f", 00:37:23.533 "superblock": true, 00:37:23.533 "num_base_bdevs": 3, 00:37:23.533 "num_base_bdevs_discovered": 2, 00:37:23.533 "num_base_bdevs_operational": 2, 00:37:23.533 "base_bdevs_list": [ 00:37:23.533 { 00:37:23.533 "name": null, 00:37:23.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:23.533 "is_configured": false, 00:37:23.533 "data_offset": 0, 00:37:23.533 "data_size": 63488 00:37:23.533 }, 00:37:23.534 { 00:37:23.534 "name": "BaseBdev2", 00:37:23.534 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 
00:37:23.534 "is_configured": true, 00:37:23.534 "data_offset": 2048, 00:37:23.534 "data_size": 63488 00:37:23.534 }, 00:37:23.534 { 00:37:23.534 "name": "BaseBdev3", 00:37:23.534 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:23.534 "is_configured": true, 00:37:23.534 "data_offset": 2048, 00:37:23.534 "data_size": 63488 00:37:23.534 } 00:37:23.534 ] 00:37:23.534 }' 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.534 17:34:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:23.534 [2024-11-26 17:34:00.895012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:23.534 [2024-11-26 17:34:00.895191] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:23.534 [2024-11-26 17:34:00.895213] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:23.534 request: 00:37:23.534 { 00:37:23.534 "base_bdev": "BaseBdev1", 00:37:23.534 "raid_bdev": "raid_bdev1", 00:37:23.534 "method": "bdev_raid_add_base_bdev", 00:37:23.534 "req_id": 1 00:37:23.534 } 00:37:23.534 Got JSON-RPC error response 00:37:23.534 response: 00:37:23.534 { 00:37:23.534 "code": -22, 00:37:23.534 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:23.534 } 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:23.534 17:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:24.468 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.748 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.748 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:24.748 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.748 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.748 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:24.748 "name": "raid_bdev1", 00:37:24.748 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:24.748 "strip_size_kb": 64, 00:37:24.748 "state": "online", 00:37:24.748 "raid_level": "raid5f", 00:37:24.748 "superblock": true, 00:37:24.748 "num_base_bdevs": 3, 00:37:24.748 "num_base_bdevs_discovered": 2, 00:37:24.748 "num_base_bdevs_operational": 2, 00:37:24.748 "base_bdevs_list": [ 00:37:24.748 { 00:37:24.748 "name": null, 00:37:24.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:24.748 "is_configured": false, 00:37:24.748 "data_offset": 0, 00:37:24.748 "data_size": 63488 00:37:24.748 }, 00:37:24.748 { 00:37:24.748 
"name": "BaseBdev2", 00:37:24.748 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:24.748 "is_configured": true, 00:37:24.748 "data_offset": 2048, 00:37:24.748 "data_size": 63488 00:37:24.748 }, 00:37:24.748 { 00:37:24.748 "name": "BaseBdev3", 00:37:24.748 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:24.748 "is_configured": true, 00:37:24.748 "data_offset": 2048, 00:37:24.748 "data_size": 63488 00:37:24.749 } 00:37:24.749 ] 00:37:24.749 }' 00:37:24.749 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:24.749 17:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:25.007 "name": "raid_bdev1", 00:37:25.007 "uuid": "4a8cd08b-9f01-44ad-8e12-cebfb8748ee5", 00:37:25.007 
"strip_size_kb": 64, 00:37:25.007 "state": "online", 00:37:25.007 "raid_level": "raid5f", 00:37:25.007 "superblock": true, 00:37:25.007 "num_base_bdevs": 3, 00:37:25.007 "num_base_bdevs_discovered": 2, 00:37:25.007 "num_base_bdevs_operational": 2, 00:37:25.007 "base_bdevs_list": [ 00:37:25.007 { 00:37:25.007 "name": null, 00:37:25.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:25.007 "is_configured": false, 00:37:25.007 "data_offset": 0, 00:37:25.007 "data_size": 63488 00:37:25.007 }, 00:37:25.007 { 00:37:25.007 "name": "BaseBdev2", 00:37:25.007 "uuid": "a47d0352-cff8-5963-aaf2-a6f194e60ef8", 00:37:25.007 "is_configured": true, 00:37:25.007 "data_offset": 2048, 00:37:25.007 "data_size": 63488 00:37:25.007 }, 00:37:25.007 { 00:37:25.007 "name": "BaseBdev3", 00:37:25.007 "uuid": "2f722e5c-1404-529f-a75b-d0064eaa0fac", 00:37:25.007 "is_configured": true, 00:37:25.007 "data_offset": 2048, 00:37:25.007 "data_size": 63488 00:37:25.007 } 00:37:25.007 ] 00:37:25.007 }' 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:25.007 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82489 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82489 ']' 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82489 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:25.265 17:34:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82489 00:37:25.265 killing process with pid 82489 00:37:25.265 Received shutdown signal, test time was about 60.000000 seconds 00:37:25.265 00:37:25.265 Latency(us) 00:37:25.265 [2024-11-26T17:34:02.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.265 [2024-11-26T17:34:02.712Z] =================================================================================================================== 00:37:25.265 [2024-11-26T17:34:02.712Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82489' 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82489 00:37:25.265 [2024-11-26 17:34:02.511578] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:25.265 17:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82489 00:37:25.265 [2024-11-26 17:34:02.511698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:25.265 [2024-11-26 17:34:02.511761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:25.265 [2024-11-26 17:34:02.511776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:37:25.524 [2024-11-26 17:34:02.920113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:26.934 17:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:37:26.934 00:37:26.934 real 0m23.406s 00:37:26.934 user 0m29.918s 
00:37:26.934 sys 0m2.968s 00:37:26.934 17:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:26.934 ************************************ 00:37:26.934 END TEST raid5f_rebuild_test_sb 00:37:26.934 ************************************ 00:37:26.934 17:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:26.934 17:34:04 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:37:26.934 17:34:04 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:37:26.934 17:34:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:26.934 17:34:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:26.934 17:34:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:26.934 ************************************ 00:37:26.934 START TEST raid5f_state_function_test 00:37:26.934 ************************************ 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:37:26.934 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83243 00:37:26.935 Process raid pid: 83243 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83243' 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83243 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83243 ']' 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:26.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:26.935 17:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.935 [2024-11-26 17:34:04.217685] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:37:26.935 [2024-11-26 17:34:04.217865] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:27.193 [2024-11-26 17:34:04.410197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.193 [2024-11-26 17:34:04.526837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.451 [2024-11-26 17:34:04.740819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:27.451 [2024-11-26 17:34:04.740864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:27.709 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:27.709 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:37:27.709 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:27.709 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.709 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.709 [2024-11-26 17:34:05.122174] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:27.709 [2024-11-26 17:34:05.122232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:27.709 [2024-11-26 17:34:05.122243] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:27.709 [2024-11-26 17:34:05.122265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:27.709 [2024-11-26 17:34:05.122273] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:37:27.710 [2024-11-26 17:34:05.122286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:27.710 [2024-11-26 17:34:05.122294] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:27.710 [2024-11-26 17:34:05.122306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:27.710 17:34:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.710 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.969 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:27.969 "name": "Existed_Raid", 00:37:27.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.969 "strip_size_kb": 64, 00:37:27.969 "state": "configuring", 00:37:27.969 "raid_level": "raid5f", 00:37:27.969 "superblock": false, 00:37:27.969 "num_base_bdevs": 4, 00:37:27.969 "num_base_bdevs_discovered": 0, 00:37:27.969 "num_base_bdevs_operational": 4, 00:37:27.969 "base_bdevs_list": [ 00:37:27.969 { 00:37:27.969 "name": "BaseBdev1", 00:37:27.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.969 "is_configured": false, 00:37:27.969 "data_offset": 0, 00:37:27.969 "data_size": 0 00:37:27.969 }, 00:37:27.969 { 00:37:27.969 "name": "BaseBdev2", 00:37:27.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.969 "is_configured": false, 00:37:27.969 "data_offset": 0, 00:37:27.969 "data_size": 0 00:37:27.969 }, 00:37:27.969 { 00:37:27.969 "name": "BaseBdev3", 00:37:27.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.969 "is_configured": false, 00:37:27.969 "data_offset": 0, 00:37:27.969 "data_size": 0 00:37:27.969 }, 00:37:27.969 { 00:37:27.969 "name": "BaseBdev4", 00:37:27.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.969 "is_configured": false, 00:37:27.969 "data_offset": 0, 00:37:27.969 "data_size": 0 00:37:27.969 } 00:37:27.969 ] 00:37:27.969 }' 00:37:27.969 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:27.969 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.228 [2024-11-26 17:34:05.546199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:28.228 [2024-11-26 17:34:05.546244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.228 [2024-11-26 17:34:05.554193] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:28.228 [2024-11-26 17:34:05.554238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:28.228 [2024-11-26 17:34:05.554247] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:28.228 [2024-11-26 17:34:05.554270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:28.228 [2024-11-26 17:34:05.554278] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:28.228 [2024-11-26 17:34:05.554291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:28.228 [2024-11-26 17:34:05.554298] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:37:28.228 [2024-11-26 17:34:05.554310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.228 [2024-11-26 17:34:05.600319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:28.228 BaseBdev1 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.228 
17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.228 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.228 [ 00:37:28.228 { 00:37:28.228 "name": "BaseBdev1", 00:37:28.228 "aliases": [ 00:37:28.228 "0701edc5-2527-4e38-bcb9-e125114fd271" 00:37:28.228 ], 00:37:28.228 "product_name": "Malloc disk", 00:37:28.228 "block_size": 512, 00:37:28.228 "num_blocks": 65536, 00:37:28.228 "uuid": "0701edc5-2527-4e38-bcb9-e125114fd271", 00:37:28.228 "assigned_rate_limits": { 00:37:28.228 "rw_ios_per_sec": 0, 00:37:28.228 "rw_mbytes_per_sec": 0, 00:37:28.228 "r_mbytes_per_sec": 0, 00:37:28.228 "w_mbytes_per_sec": 0 00:37:28.228 }, 00:37:28.228 "claimed": true, 00:37:28.228 "claim_type": "exclusive_write", 00:37:28.228 "zoned": false, 00:37:28.228 "supported_io_types": { 00:37:28.228 "read": true, 00:37:28.228 "write": true, 00:37:28.228 "unmap": true, 00:37:28.228 "flush": true, 00:37:28.228 "reset": true, 00:37:28.228 "nvme_admin": false, 00:37:28.228 "nvme_io": false, 00:37:28.228 "nvme_io_md": false, 00:37:28.228 "write_zeroes": true, 00:37:28.228 "zcopy": true, 00:37:28.228 "get_zone_info": false, 00:37:28.228 "zone_management": false, 00:37:28.228 "zone_append": false, 00:37:28.228 "compare": false, 00:37:28.228 "compare_and_write": false, 00:37:28.228 "abort": true, 00:37:28.228 "seek_hole": false, 00:37:28.228 "seek_data": false, 00:37:28.228 "copy": true, 00:37:28.228 "nvme_iov_md": false 00:37:28.228 }, 00:37:28.228 "memory_domains": [ 00:37:28.228 { 00:37:28.228 "dma_device_id": "system", 00:37:28.229 "dma_device_type": 1 00:37:28.229 }, 00:37:28.229 { 00:37:28.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:28.229 "dma_device_type": 2 00:37:28.229 } 00:37:28.229 ], 00:37:28.229 "driver_specific": {} 00:37:28.229 } 
00:37:28.229 ] 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.229 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:28.487 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:28.487 "name": "Existed_Raid", 00:37:28.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.487 "strip_size_kb": 64, 00:37:28.487 "state": "configuring", 00:37:28.487 "raid_level": "raid5f", 00:37:28.487 "superblock": false, 00:37:28.487 "num_base_bdevs": 4, 00:37:28.487 "num_base_bdevs_discovered": 1, 00:37:28.487 "num_base_bdevs_operational": 4, 00:37:28.487 "base_bdevs_list": [ 00:37:28.487 { 00:37:28.487 "name": "BaseBdev1", 00:37:28.487 "uuid": "0701edc5-2527-4e38-bcb9-e125114fd271", 00:37:28.487 "is_configured": true, 00:37:28.487 "data_offset": 0, 00:37:28.487 "data_size": 65536 00:37:28.487 }, 00:37:28.487 { 00:37:28.487 "name": "BaseBdev2", 00:37:28.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.487 "is_configured": false, 00:37:28.487 "data_offset": 0, 00:37:28.487 "data_size": 0 00:37:28.487 }, 00:37:28.487 { 00:37:28.487 "name": "BaseBdev3", 00:37:28.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.487 "is_configured": false, 00:37:28.487 "data_offset": 0, 00:37:28.487 "data_size": 0 00:37:28.487 }, 00:37:28.487 { 00:37:28.487 "name": "BaseBdev4", 00:37:28.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.487 "is_configured": false, 00:37:28.487 "data_offset": 0, 00:37:28.487 "data_size": 0 00:37:28.487 } 00:37:28.487 ] 00:37:28.487 }' 00:37:28.487 17:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:28.487 17:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.746 
[2024-11-26 17:34:06.088611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:28.746 [2024-11-26 17:34:06.088718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.746 [2024-11-26 17:34:06.096655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:28.746 [2024-11-26 17:34:06.100178] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:28.746 [2024-11-26 17:34:06.100261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:28.746 [2024-11-26 17:34:06.100280] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:28.746 [2024-11-26 17:34:06.100305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:28.746 [2024-11-26 17:34:06.100318] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:28.746 [2024-11-26 17:34:06.100338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:28.746 "name": "Existed_Raid", 00:37:28.746 "uuid": "00000000-0000-0000-0000-000000000000", 
00:37:28.746 "strip_size_kb": 64, 00:37:28.746 "state": "configuring", 00:37:28.746 "raid_level": "raid5f", 00:37:28.746 "superblock": false, 00:37:28.746 "num_base_bdevs": 4, 00:37:28.746 "num_base_bdevs_discovered": 1, 00:37:28.746 "num_base_bdevs_operational": 4, 00:37:28.746 "base_bdevs_list": [ 00:37:28.746 { 00:37:28.746 "name": "BaseBdev1", 00:37:28.746 "uuid": "0701edc5-2527-4e38-bcb9-e125114fd271", 00:37:28.746 "is_configured": true, 00:37:28.746 "data_offset": 0, 00:37:28.746 "data_size": 65536 00:37:28.746 }, 00:37:28.746 { 00:37:28.746 "name": "BaseBdev2", 00:37:28.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.746 "is_configured": false, 00:37:28.746 "data_offset": 0, 00:37:28.746 "data_size": 0 00:37:28.746 }, 00:37:28.746 { 00:37:28.746 "name": "BaseBdev3", 00:37:28.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.746 "is_configured": false, 00:37:28.746 "data_offset": 0, 00:37:28.746 "data_size": 0 00:37:28.746 }, 00:37:28.746 { 00:37:28.746 "name": "BaseBdev4", 00:37:28.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.746 "is_configured": false, 00:37:28.746 "data_offset": 0, 00:37:28.746 "data_size": 0 00:37:28.746 } 00:37:28.746 ] 00:37:28.746 }' 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:28.746 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.315 [2024-11-26 17:34:06.587191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:29.315 BaseBdev2 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.315 [ 00:37:29.315 { 00:37:29.315 "name": "BaseBdev2", 00:37:29.315 "aliases": [ 00:37:29.315 "da532f28-da6a-41a2-9455-328548923dd9" 00:37:29.315 ], 00:37:29.315 "product_name": "Malloc disk", 00:37:29.315 "block_size": 512, 00:37:29.315 "num_blocks": 65536, 00:37:29.315 "uuid": "da532f28-da6a-41a2-9455-328548923dd9", 00:37:29.315 "assigned_rate_limits": { 00:37:29.315 "rw_ios_per_sec": 0, 00:37:29.315 "rw_mbytes_per_sec": 0, 00:37:29.315 
"r_mbytes_per_sec": 0, 00:37:29.315 "w_mbytes_per_sec": 0 00:37:29.315 }, 00:37:29.315 "claimed": true, 00:37:29.315 "claim_type": "exclusive_write", 00:37:29.315 "zoned": false, 00:37:29.315 "supported_io_types": { 00:37:29.315 "read": true, 00:37:29.315 "write": true, 00:37:29.315 "unmap": true, 00:37:29.315 "flush": true, 00:37:29.315 "reset": true, 00:37:29.315 "nvme_admin": false, 00:37:29.315 "nvme_io": false, 00:37:29.315 "nvme_io_md": false, 00:37:29.315 "write_zeroes": true, 00:37:29.315 "zcopy": true, 00:37:29.315 "get_zone_info": false, 00:37:29.315 "zone_management": false, 00:37:29.315 "zone_append": false, 00:37:29.315 "compare": false, 00:37:29.315 "compare_and_write": false, 00:37:29.315 "abort": true, 00:37:29.315 "seek_hole": false, 00:37:29.315 "seek_data": false, 00:37:29.315 "copy": true, 00:37:29.315 "nvme_iov_md": false 00:37:29.315 }, 00:37:29.315 "memory_domains": [ 00:37:29.315 { 00:37:29.315 "dma_device_id": "system", 00:37:29.315 "dma_device_type": 1 00:37:29.315 }, 00:37:29.315 { 00:37:29.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:29.315 "dma_device_type": 2 00:37:29.315 } 00:37:29.315 ], 00:37:29.315 "driver_specific": {} 00:37:29.315 } 00:37:29.315 ] 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:29.315 "name": "Existed_Raid", 00:37:29.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.315 "strip_size_kb": 64, 00:37:29.315 "state": "configuring", 00:37:29.315 "raid_level": "raid5f", 00:37:29.315 "superblock": false, 00:37:29.315 "num_base_bdevs": 4, 00:37:29.315 "num_base_bdevs_discovered": 2, 00:37:29.315 "num_base_bdevs_operational": 4, 00:37:29.315 "base_bdevs_list": [ 00:37:29.315 { 00:37:29.315 "name": "BaseBdev1", 00:37:29.315 "uuid": 
"0701edc5-2527-4e38-bcb9-e125114fd271", 00:37:29.315 "is_configured": true, 00:37:29.315 "data_offset": 0, 00:37:29.315 "data_size": 65536 00:37:29.315 }, 00:37:29.315 { 00:37:29.315 "name": "BaseBdev2", 00:37:29.315 "uuid": "da532f28-da6a-41a2-9455-328548923dd9", 00:37:29.315 "is_configured": true, 00:37:29.315 "data_offset": 0, 00:37:29.315 "data_size": 65536 00:37:29.315 }, 00:37:29.315 { 00:37:29.315 "name": "BaseBdev3", 00:37:29.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.315 "is_configured": false, 00:37:29.315 "data_offset": 0, 00:37:29.315 "data_size": 0 00:37:29.315 }, 00:37:29.315 { 00:37:29.315 "name": "BaseBdev4", 00:37:29.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.315 "is_configured": false, 00:37:29.315 "data_offset": 0, 00:37:29.315 "data_size": 0 00:37:29.315 } 00:37:29.315 ] 00:37:29.315 }' 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:29.315 17:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.882 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.883 [2024-11-26 17:34:07.138947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:29.883 BaseBdev3 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.883 [ 00:37:29.883 { 00:37:29.883 "name": "BaseBdev3", 00:37:29.883 "aliases": [ 00:37:29.883 "50064bc5-fd67-434f-9128-9188c437d87f" 00:37:29.883 ], 00:37:29.883 "product_name": "Malloc disk", 00:37:29.883 "block_size": 512, 00:37:29.883 "num_blocks": 65536, 00:37:29.883 "uuid": "50064bc5-fd67-434f-9128-9188c437d87f", 00:37:29.883 "assigned_rate_limits": { 00:37:29.883 "rw_ios_per_sec": 0, 00:37:29.883 "rw_mbytes_per_sec": 0, 00:37:29.883 "r_mbytes_per_sec": 0, 00:37:29.883 "w_mbytes_per_sec": 0 00:37:29.883 }, 00:37:29.883 "claimed": true, 00:37:29.883 "claim_type": "exclusive_write", 00:37:29.883 "zoned": false, 00:37:29.883 "supported_io_types": { 00:37:29.883 "read": true, 00:37:29.883 "write": true, 00:37:29.883 "unmap": true, 00:37:29.883 "flush": true, 00:37:29.883 "reset": true, 00:37:29.883 "nvme_admin": false, 
00:37:29.883 "nvme_io": false, 00:37:29.883 "nvme_io_md": false, 00:37:29.883 "write_zeroes": true, 00:37:29.883 "zcopy": true, 00:37:29.883 "get_zone_info": false, 00:37:29.883 "zone_management": false, 00:37:29.883 "zone_append": false, 00:37:29.883 "compare": false, 00:37:29.883 "compare_and_write": false, 00:37:29.883 "abort": true, 00:37:29.883 "seek_hole": false, 00:37:29.883 "seek_data": false, 00:37:29.883 "copy": true, 00:37:29.883 "nvme_iov_md": false 00:37:29.883 }, 00:37:29.883 "memory_domains": [ 00:37:29.883 { 00:37:29.883 "dma_device_id": "system", 00:37:29.883 "dma_device_type": 1 00:37:29.883 }, 00:37:29.883 { 00:37:29.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:29.883 "dma_device_type": 2 00:37:29.883 } 00:37:29.883 ], 00:37:29.883 "driver_specific": {} 00:37:29.883 } 00:37:29.883 ] 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:29.883 "name": "Existed_Raid", 00:37:29.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.883 "strip_size_kb": 64, 00:37:29.883 "state": "configuring", 00:37:29.883 "raid_level": "raid5f", 00:37:29.883 "superblock": false, 00:37:29.883 "num_base_bdevs": 4, 00:37:29.883 "num_base_bdevs_discovered": 3, 00:37:29.883 "num_base_bdevs_operational": 4, 00:37:29.883 "base_bdevs_list": [ 00:37:29.883 { 00:37:29.883 "name": "BaseBdev1", 00:37:29.883 "uuid": "0701edc5-2527-4e38-bcb9-e125114fd271", 00:37:29.883 "is_configured": true, 00:37:29.883 "data_offset": 0, 00:37:29.883 "data_size": 65536 00:37:29.883 }, 00:37:29.883 { 00:37:29.883 "name": "BaseBdev2", 00:37:29.883 "uuid": "da532f28-da6a-41a2-9455-328548923dd9", 00:37:29.883 "is_configured": true, 00:37:29.883 "data_offset": 0, 00:37:29.883 "data_size": 65536 00:37:29.883 }, 00:37:29.883 { 
00:37:29.883 "name": "BaseBdev3", 00:37:29.883 "uuid": "50064bc5-fd67-434f-9128-9188c437d87f", 00:37:29.883 "is_configured": true, 00:37:29.883 "data_offset": 0, 00:37:29.883 "data_size": 65536 00:37:29.883 }, 00:37:29.883 { 00:37:29.883 "name": "BaseBdev4", 00:37:29.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.883 "is_configured": false, 00:37:29.883 "data_offset": 0, 00:37:29.883 "data_size": 0 00:37:29.883 } 00:37:29.883 ] 00:37:29.883 }' 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:29.883 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:30.451 [2024-11-26 17:34:07.683753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:30.451 [2024-11-26 17:34:07.684064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:30.451 [2024-11-26 17:34:07.684085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:37:30.451 [2024-11-26 17:34:07.684403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:30.451 [2024-11-26 17:34:07.693145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:30.451 [2024-11-26 17:34:07.693300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:37:30.451 [2024-11-26 17:34:07.693653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:30.451 BaseBdev4 00:37:30.451 17:34:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:30.451 [ 00:37:30.451 { 00:37:30.451 "name": "BaseBdev4", 00:37:30.451 "aliases": [ 00:37:30.451 "a7705387-d12e-4b9d-a144-01eb4dedf761" 00:37:30.451 ], 00:37:30.451 "product_name": "Malloc disk", 00:37:30.451 "block_size": 512, 00:37:30.451 "num_blocks": 65536, 00:37:30.451 "uuid": "a7705387-d12e-4b9d-a144-01eb4dedf761", 00:37:30.451 "assigned_rate_limits": { 00:37:30.451 "rw_ios_per_sec": 0, 00:37:30.451 
"rw_mbytes_per_sec": 0, 00:37:30.451 "r_mbytes_per_sec": 0, 00:37:30.451 "w_mbytes_per_sec": 0 00:37:30.451 }, 00:37:30.451 "claimed": true, 00:37:30.451 "claim_type": "exclusive_write", 00:37:30.451 "zoned": false, 00:37:30.451 "supported_io_types": { 00:37:30.451 "read": true, 00:37:30.451 "write": true, 00:37:30.451 "unmap": true, 00:37:30.451 "flush": true, 00:37:30.451 "reset": true, 00:37:30.451 "nvme_admin": false, 00:37:30.451 "nvme_io": false, 00:37:30.451 "nvme_io_md": false, 00:37:30.451 "write_zeroes": true, 00:37:30.451 "zcopy": true, 00:37:30.451 "get_zone_info": false, 00:37:30.451 "zone_management": false, 00:37:30.451 "zone_append": false, 00:37:30.451 "compare": false, 00:37:30.451 "compare_and_write": false, 00:37:30.451 "abort": true, 00:37:30.451 "seek_hole": false, 00:37:30.451 "seek_data": false, 00:37:30.451 "copy": true, 00:37:30.451 "nvme_iov_md": false 00:37:30.451 }, 00:37:30.451 "memory_domains": [ 00:37:30.451 { 00:37:30.451 "dma_device_id": "system", 00:37:30.451 "dma_device_type": 1 00:37:30.451 }, 00:37:30.451 { 00:37:30.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:30.451 "dma_device_type": 2 00:37:30.451 } 00:37:30.451 ], 00:37:30.451 "driver_specific": {} 00:37:30.451 } 00:37:30.451 ] 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:30.451 17:34:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:30.451 "name": "Existed_Raid", 00:37:30.451 "uuid": "7aeff1b5-b60e-4300-a4df-020ed87da119", 00:37:30.451 "strip_size_kb": 64, 00:37:30.451 "state": "online", 00:37:30.451 "raid_level": "raid5f", 00:37:30.451 "superblock": false, 00:37:30.451 "num_base_bdevs": 4, 00:37:30.451 "num_base_bdevs_discovered": 4, 00:37:30.451 "num_base_bdevs_operational": 4, 00:37:30.451 "base_bdevs_list": [ 00:37:30.451 { 00:37:30.451 "name": 
"BaseBdev1", 00:37:30.451 "uuid": "0701edc5-2527-4e38-bcb9-e125114fd271", 00:37:30.451 "is_configured": true, 00:37:30.451 "data_offset": 0, 00:37:30.451 "data_size": 65536 00:37:30.451 }, 00:37:30.451 { 00:37:30.451 "name": "BaseBdev2", 00:37:30.451 "uuid": "da532f28-da6a-41a2-9455-328548923dd9", 00:37:30.451 "is_configured": true, 00:37:30.451 "data_offset": 0, 00:37:30.451 "data_size": 65536 00:37:30.451 }, 00:37:30.451 { 00:37:30.451 "name": "BaseBdev3", 00:37:30.451 "uuid": "50064bc5-fd67-434f-9128-9188c437d87f", 00:37:30.451 "is_configured": true, 00:37:30.451 "data_offset": 0, 00:37:30.451 "data_size": 65536 00:37:30.451 }, 00:37:30.451 { 00:37:30.451 "name": "BaseBdev4", 00:37:30.451 "uuid": "a7705387-d12e-4b9d-a144-01eb4dedf761", 00:37:30.451 "is_configured": true, 00:37:30.451 "data_offset": 0, 00:37:30.451 "data_size": 65536 00:37:30.451 } 00:37:30.451 ] 00:37:30.451 }' 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:30.451 17:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.019 [2024-11-26 17:34:08.179854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:31.019 "name": "Existed_Raid", 00:37:31.019 "aliases": [ 00:37:31.019 "7aeff1b5-b60e-4300-a4df-020ed87da119" 00:37:31.019 ], 00:37:31.019 "product_name": "Raid Volume", 00:37:31.019 "block_size": 512, 00:37:31.019 "num_blocks": 196608, 00:37:31.019 "uuid": "7aeff1b5-b60e-4300-a4df-020ed87da119", 00:37:31.019 "assigned_rate_limits": { 00:37:31.019 "rw_ios_per_sec": 0, 00:37:31.019 "rw_mbytes_per_sec": 0, 00:37:31.019 "r_mbytes_per_sec": 0, 00:37:31.019 "w_mbytes_per_sec": 0 00:37:31.019 }, 00:37:31.019 "claimed": false, 00:37:31.019 "zoned": false, 00:37:31.019 "supported_io_types": { 00:37:31.019 "read": true, 00:37:31.019 "write": true, 00:37:31.019 "unmap": false, 00:37:31.019 "flush": false, 00:37:31.019 "reset": true, 00:37:31.019 "nvme_admin": false, 00:37:31.019 "nvme_io": false, 00:37:31.019 "nvme_io_md": false, 00:37:31.019 "write_zeroes": true, 00:37:31.019 "zcopy": false, 00:37:31.019 "get_zone_info": false, 00:37:31.019 "zone_management": false, 00:37:31.019 "zone_append": false, 00:37:31.019 "compare": false, 00:37:31.019 "compare_and_write": false, 00:37:31.019 "abort": false, 00:37:31.019 "seek_hole": false, 00:37:31.019 "seek_data": false, 00:37:31.019 "copy": false, 00:37:31.019 "nvme_iov_md": false 00:37:31.019 }, 00:37:31.019 "driver_specific": { 00:37:31.019 "raid": { 00:37:31.019 "uuid": "7aeff1b5-b60e-4300-a4df-020ed87da119", 00:37:31.019 "strip_size_kb": 64, 
00:37:31.019 "state": "online", 00:37:31.019 "raid_level": "raid5f", 00:37:31.019 "superblock": false, 00:37:31.019 "num_base_bdevs": 4, 00:37:31.019 "num_base_bdevs_discovered": 4, 00:37:31.019 "num_base_bdevs_operational": 4, 00:37:31.019 "base_bdevs_list": [ 00:37:31.019 { 00:37:31.019 "name": "BaseBdev1", 00:37:31.019 "uuid": "0701edc5-2527-4e38-bcb9-e125114fd271", 00:37:31.019 "is_configured": true, 00:37:31.019 "data_offset": 0, 00:37:31.019 "data_size": 65536 00:37:31.019 }, 00:37:31.019 { 00:37:31.019 "name": "BaseBdev2", 00:37:31.019 "uuid": "da532f28-da6a-41a2-9455-328548923dd9", 00:37:31.019 "is_configured": true, 00:37:31.019 "data_offset": 0, 00:37:31.019 "data_size": 65536 00:37:31.019 }, 00:37:31.019 { 00:37:31.019 "name": "BaseBdev3", 00:37:31.019 "uuid": "50064bc5-fd67-434f-9128-9188c437d87f", 00:37:31.019 "is_configured": true, 00:37:31.019 "data_offset": 0, 00:37:31.019 "data_size": 65536 00:37:31.019 }, 00:37:31.019 { 00:37:31.019 "name": "BaseBdev4", 00:37:31.019 "uuid": "a7705387-d12e-4b9d-a144-01eb4dedf761", 00:37:31.019 "is_configured": true, 00:37:31.019 "data_offset": 0, 00:37:31.019 "data_size": 65536 00:37:31.019 } 00:37:31.019 ] 00:37:31.019 } 00:37:31.019 } 00:37:31.019 }' 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:37:31.019 BaseBdev2 00:37:31.019 BaseBdev3 00:37:31.019 BaseBdev4' 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:31.019 17:34:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:31.019 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.020 17:34:08 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:37:31.279 [2024-11-26 17:34:08.467722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:31.279 17:34:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:31.279 "name": "Existed_Raid", 00:37:31.279 "uuid": "7aeff1b5-b60e-4300-a4df-020ed87da119", 00:37:31.279 "strip_size_kb": 64, 00:37:31.279 "state": "online", 00:37:31.279 "raid_level": "raid5f", 00:37:31.279 "superblock": false, 00:37:31.279 "num_base_bdevs": 4, 00:37:31.279 "num_base_bdevs_discovered": 3, 00:37:31.279 "num_base_bdevs_operational": 3, 00:37:31.279 "base_bdevs_list": [ 00:37:31.279 { 00:37:31.279 "name": null, 00:37:31.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:31.279 "is_configured": false, 00:37:31.279 "data_offset": 0, 00:37:31.279 "data_size": 65536 00:37:31.279 }, 00:37:31.279 { 00:37:31.279 "name": "BaseBdev2", 00:37:31.279 "uuid": "da532f28-da6a-41a2-9455-328548923dd9", 00:37:31.279 "is_configured": true, 00:37:31.279 "data_offset": 0, 00:37:31.279 "data_size": 65536 00:37:31.279 }, 00:37:31.279 { 00:37:31.279 "name": "BaseBdev3", 00:37:31.279 "uuid": "50064bc5-fd67-434f-9128-9188c437d87f", 00:37:31.279 "is_configured": true, 00:37:31.279 "data_offset": 0, 00:37:31.279 "data_size": 65536 00:37:31.279 }, 00:37:31.279 { 00:37:31.279 "name": "BaseBdev4", 00:37:31.279 "uuid": "a7705387-d12e-4b9d-a144-01eb4dedf761", 00:37:31.279 "is_configured": true, 00:37:31.279 "data_offset": 0, 00:37:31.279 "data_size": 65536 00:37:31.279 } 00:37:31.279 ] 00:37:31.279 }' 00:37:31.279 
17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:31.279 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.848 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:37:31.848 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:31.848 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:31.848 17:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:31.848 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.848 17:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.848 [2024-11-26 17:34:09.033699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:31.848 [2024-11-26 17:34:09.033803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:31.848 [2024-11-26 17:34:09.130183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.848 [2024-11-26 17:34:09.174210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.848 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.107 [2024-11-26 17:34:09.326427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:37:32.107 [2024-11-26 17:34:09.326486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.107 17:34:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.107 BaseBdev2 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:32.107 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.108 [ 00:37:32.108 { 00:37:32.108 "name": "BaseBdev2", 00:37:32.108 "aliases": [ 00:37:32.108 "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966" 00:37:32.108 ], 00:37:32.108 "product_name": "Malloc disk", 00:37:32.108 "block_size": 512, 00:37:32.108 "num_blocks": 65536, 00:37:32.108 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:32.108 "assigned_rate_limits": { 00:37:32.108 "rw_ios_per_sec": 0, 00:37:32.108 "rw_mbytes_per_sec": 0, 00:37:32.108 "r_mbytes_per_sec": 0, 00:37:32.108 "w_mbytes_per_sec": 0 00:37:32.108 }, 00:37:32.108 "claimed": false, 00:37:32.108 "zoned": false, 00:37:32.108 "supported_io_types": { 00:37:32.108 "read": true, 00:37:32.108 "write": true, 00:37:32.108 "unmap": true, 00:37:32.108 "flush": true, 00:37:32.108 "reset": true, 00:37:32.108 "nvme_admin": false, 00:37:32.108 "nvme_io": false, 00:37:32.108 "nvme_io_md": false, 00:37:32.108 "write_zeroes": true, 00:37:32.108 "zcopy": true, 00:37:32.108 "get_zone_info": false, 00:37:32.108 "zone_management": false, 00:37:32.108 "zone_append": false, 00:37:32.108 "compare": false, 00:37:32.108 "compare_and_write": false, 00:37:32.108 "abort": true, 00:37:32.108 "seek_hole": false, 00:37:32.108 "seek_data": false, 00:37:32.108 "copy": true, 00:37:32.108 "nvme_iov_md": false 00:37:32.108 }, 00:37:32.108 "memory_domains": [ 00:37:32.108 { 00:37:32.108 "dma_device_id": "system", 00:37:32.108 "dma_device_type": 1 00:37:32.108 }, 
00:37:32.108 { 00:37:32.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:32.108 "dma_device_type": 2 00:37:32.108 } 00:37:32.108 ], 00:37:32.108 "driver_specific": {} 00:37:32.108 } 00:37:32.108 ] 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.108 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.367 BaseBdev3 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.367 [ 00:37:32.367 { 00:37:32.367 "name": "BaseBdev3", 00:37:32.367 "aliases": [ 00:37:32.367 "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5" 00:37:32.367 ], 00:37:32.367 "product_name": "Malloc disk", 00:37:32.367 "block_size": 512, 00:37:32.367 "num_blocks": 65536, 00:37:32.367 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:32.367 "assigned_rate_limits": { 00:37:32.367 "rw_ios_per_sec": 0, 00:37:32.367 "rw_mbytes_per_sec": 0, 00:37:32.367 "r_mbytes_per_sec": 0, 00:37:32.367 "w_mbytes_per_sec": 0 00:37:32.367 }, 00:37:32.367 "claimed": false, 00:37:32.367 "zoned": false, 00:37:32.367 "supported_io_types": { 00:37:32.367 "read": true, 00:37:32.367 "write": true, 00:37:32.367 "unmap": true, 00:37:32.367 "flush": true, 00:37:32.367 "reset": true, 00:37:32.367 "nvme_admin": false, 00:37:32.367 "nvme_io": false, 00:37:32.367 "nvme_io_md": false, 00:37:32.367 "write_zeroes": true, 00:37:32.367 "zcopy": true, 00:37:32.367 "get_zone_info": false, 00:37:32.367 "zone_management": false, 00:37:32.367 "zone_append": false, 00:37:32.367 "compare": false, 00:37:32.367 "compare_and_write": false, 00:37:32.367 "abort": true, 00:37:32.367 "seek_hole": false, 00:37:32.367 "seek_data": false, 00:37:32.367 "copy": true, 00:37:32.367 "nvme_iov_md": false 00:37:32.367 }, 00:37:32.367 "memory_domains": [ 00:37:32.367 { 00:37:32.367 "dma_device_id": "system", 00:37:32.367 
"dma_device_type": 1 00:37:32.367 }, 00:37:32.367 { 00:37:32.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:32.367 "dma_device_type": 2 00:37:32.367 } 00:37:32.367 ], 00:37:32.367 "driver_specific": {} 00:37:32.367 } 00:37:32.367 ] 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.367 BaseBdev4 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:37:32.367 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:32.368 17:34:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.368 [ 00:37:32.368 { 00:37:32.368 "name": "BaseBdev4", 00:37:32.368 "aliases": [ 00:37:32.368 "4d337c57-bf21-4312-b425-ac2ee750ac11" 00:37:32.368 ], 00:37:32.368 "product_name": "Malloc disk", 00:37:32.368 "block_size": 512, 00:37:32.368 "num_blocks": 65536, 00:37:32.368 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:32.368 "assigned_rate_limits": { 00:37:32.368 "rw_ios_per_sec": 0, 00:37:32.368 "rw_mbytes_per_sec": 0, 00:37:32.368 "r_mbytes_per_sec": 0, 00:37:32.368 "w_mbytes_per_sec": 0 00:37:32.368 }, 00:37:32.368 "claimed": false, 00:37:32.368 "zoned": false, 00:37:32.368 "supported_io_types": { 00:37:32.368 "read": true, 00:37:32.368 "write": true, 00:37:32.368 "unmap": true, 00:37:32.368 "flush": true, 00:37:32.368 "reset": true, 00:37:32.368 "nvme_admin": false, 00:37:32.368 "nvme_io": false, 00:37:32.368 "nvme_io_md": false, 00:37:32.368 "write_zeroes": true, 00:37:32.368 "zcopy": true, 00:37:32.368 "get_zone_info": false, 00:37:32.368 "zone_management": false, 00:37:32.368 "zone_append": false, 00:37:32.368 "compare": false, 00:37:32.368 "compare_and_write": false, 00:37:32.368 "abort": true, 00:37:32.368 "seek_hole": false, 00:37:32.368 "seek_data": false, 00:37:32.368 "copy": true, 00:37:32.368 "nvme_iov_md": false 00:37:32.368 }, 00:37:32.368 "memory_domains": [ 00:37:32.368 { 00:37:32.368 
"dma_device_id": "system", 00:37:32.368 "dma_device_type": 1 00:37:32.368 }, 00:37:32.368 { 00:37:32.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:32.368 "dma_device_type": 2 00:37:32.368 } 00:37:32.368 ], 00:37:32.368 "driver_specific": {} 00:37:32.368 } 00:37:32.368 ] 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.368 [2024-11-26 17:34:09.688015] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:32.368 [2024-11-26 17:34:09.688074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:32.368 [2024-11-26 17:34:09.688099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:32.368 [2024-11-26 17:34:09.690190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:32.368 [2024-11-26 17:34:09.690242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:32.368 "name": "Existed_Raid", 00:37:32.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.368 "strip_size_kb": 64, 00:37:32.368 "state": "configuring", 00:37:32.368 "raid_level": "raid5f", 00:37:32.368 "superblock": false, 00:37:32.368 
"num_base_bdevs": 4, 00:37:32.368 "num_base_bdevs_discovered": 3, 00:37:32.368 "num_base_bdevs_operational": 4, 00:37:32.368 "base_bdevs_list": [ 00:37:32.368 { 00:37:32.368 "name": "BaseBdev1", 00:37:32.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.368 "is_configured": false, 00:37:32.368 "data_offset": 0, 00:37:32.368 "data_size": 0 00:37:32.368 }, 00:37:32.368 { 00:37:32.368 "name": "BaseBdev2", 00:37:32.368 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:32.368 "is_configured": true, 00:37:32.368 "data_offset": 0, 00:37:32.368 "data_size": 65536 00:37:32.368 }, 00:37:32.368 { 00:37:32.368 "name": "BaseBdev3", 00:37:32.368 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:32.368 "is_configured": true, 00:37:32.368 "data_offset": 0, 00:37:32.368 "data_size": 65536 00:37:32.368 }, 00:37:32.368 { 00:37:32.368 "name": "BaseBdev4", 00:37:32.368 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:32.368 "is_configured": true, 00:37:32.368 "data_offset": 0, 00:37:32.368 "data_size": 65536 00:37:32.368 } 00:37:32.368 ] 00:37:32.368 }' 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:32.368 17:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.935 [2024-11-26 17:34:10.144144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:32.935 "name": "Existed_Raid", 00:37:32.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.935 "strip_size_kb": 64, 00:37:32.935 "state": "configuring", 00:37:32.935 "raid_level": "raid5f", 00:37:32.935 "superblock": false, 00:37:32.935 "num_base_bdevs": 4, 
00:37:32.935 "num_base_bdevs_discovered": 2, 00:37:32.935 "num_base_bdevs_operational": 4, 00:37:32.935 "base_bdevs_list": [ 00:37:32.935 { 00:37:32.935 "name": "BaseBdev1", 00:37:32.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.935 "is_configured": false, 00:37:32.935 "data_offset": 0, 00:37:32.935 "data_size": 0 00:37:32.935 }, 00:37:32.935 { 00:37:32.935 "name": null, 00:37:32.935 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:32.935 "is_configured": false, 00:37:32.935 "data_offset": 0, 00:37:32.935 "data_size": 65536 00:37:32.935 }, 00:37:32.935 { 00:37:32.935 "name": "BaseBdev3", 00:37:32.935 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:32.935 "is_configured": true, 00:37:32.935 "data_offset": 0, 00:37:32.935 "data_size": 65536 00:37:32.935 }, 00:37:32.935 { 00:37:32.935 "name": "BaseBdev4", 00:37:32.935 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:32.935 "is_configured": true, 00:37:32.935 "data_offset": 0, 00:37:32.935 "data_size": 65536 00:37:32.935 } 00:37:32.935 ] 00:37:32.935 }' 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:32.935 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:33.194 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:33.194 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.194 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:33.194 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:33.194 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:37:33.452 17:34:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:33.452 [2024-11-26 17:34:10.691426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:33.452 BaseBdev1 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:33.452 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.452 17:34:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:33.452 [ 00:37:33.452 { 00:37:33.452 "name": "BaseBdev1", 00:37:33.452 "aliases": [ 00:37:33.452 "0cb752ed-5972-40a6-a172-a109867a0cb3" 00:37:33.452 ], 00:37:33.452 "product_name": "Malloc disk", 00:37:33.452 "block_size": 512, 00:37:33.452 "num_blocks": 65536, 00:37:33.452 "uuid": "0cb752ed-5972-40a6-a172-a109867a0cb3", 00:37:33.453 "assigned_rate_limits": { 00:37:33.453 "rw_ios_per_sec": 0, 00:37:33.453 "rw_mbytes_per_sec": 0, 00:37:33.453 "r_mbytes_per_sec": 0, 00:37:33.453 "w_mbytes_per_sec": 0 00:37:33.453 }, 00:37:33.453 "claimed": true, 00:37:33.453 "claim_type": "exclusive_write", 00:37:33.453 "zoned": false, 00:37:33.453 "supported_io_types": { 00:37:33.453 "read": true, 00:37:33.453 "write": true, 00:37:33.453 "unmap": true, 00:37:33.453 "flush": true, 00:37:33.453 "reset": true, 00:37:33.453 "nvme_admin": false, 00:37:33.453 "nvme_io": false, 00:37:33.453 "nvme_io_md": false, 00:37:33.453 "write_zeroes": true, 00:37:33.453 "zcopy": true, 00:37:33.453 "get_zone_info": false, 00:37:33.453 "zone_management": false, 00:37:33.453 "zone_append": false, 00:37:33.453 "compare": false, 00:37:33.453 "compare_and_write": false, 00:37:33.453 "abort": true, 00:37:33.453 "seek_hole": false, 00:37:33.453 "seek_data": false, 00:37:33.453 "copy": true, 00:37:33.453 "nvme_iov_md": false 00:37:33.453 }, 00:37:33.453 "memory_domains": [ 00:37:33.453 { 00:37:33.453 "dma_device_id": "system", 00:37:33.453 "dma_device_type": 1 00:37:33.453 }, 00:37:33.453 { 00:37:33.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:33.453 "dma_device_type": 2 00:37:33.453 } 00:37:33.453 ], 00:37:33.453 "driver_specific": {} 00:37:33.453 } 00:37:33.453 ] 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:33.453 17:34:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:33.453 "name": "Existed_Raid", 00:37:33.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:33.453 "strip_size_kb": 64, 00:37:33.453 "state": 
"configuring", 00:37:33.453 "raid_level": "raid5f", 00:37:33.453 "superblock": false, 00:37:33.453 "num_base_bdevs": 4, 00:37:33.453 "num_base_bdevs_discovered": 3, 00:37:33.453 "num_base_bdevs_operational": 4, 00:37:33.453 "base_bdevs_list": [ 00:37:33.453 { 00:37:33.453 "name": "BaseBdev1", 00:37:33.453 "uuid": "0cb752ed-5972-40a6-a172-a109867a0cb3", 00:37:33.453 "is_configured": true, 00:37:33.453 "data_offset": 0, 00:37:33.453 "data_size": 65536 00:37:33.453 }, 00:37:33.453 { 00:37:33.453 "name": null, 00:37:33.453 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:33.453 "is_configured": false, 00:37:33.453 "data_offset": 0, 00:37:33.453 "data_size": 65536 00:37:33.453 }, 00:37:33.453 { 00:37:33.453 "name": "BaseBdev3", 00:37:33.453 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:33.453 "is_configured": true, 00:37:33.453 "data_offset": 0, 00:37:33.453 "data_size": 65536 00:37:33.453 }, 00:37:33.453 { 00:37:33.453 "name": "BaseBdev4", 00:37:33.453 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:33.453 "is_configured": true, 00:37:33.453 "data_offset": 0, 00:37:33.453 "data_size": 65536 00:37:33.453 } 00:37:33.453 ] 00:37:33.453 }' 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:33.453 17:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.021 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:34.021 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.021 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.021 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.022 17:34:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.022 [2024-11-26 17:34:11.235629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.022 17:34:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:34.022 "name": "Existed_Raid", 00:37:34.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:34.022 "strip_size_kb": 64, 00:37:34.022 "state": "configuring", 00:37:34.022 "raid_level": "raid5f", 00:37:34.022 "superblock": false, 00:37:34.022 "num_base_bdevs": 4, 00:37:34.022 "num_base_bdevs_discovered": 2, 00:37:34.022 "num_base_bdevs_operational": 4, 00:37:34.022 "base_bdevs_list": [ 00:37:34.022 { 00:37:34.022 "name": "BaseBdev1", 00:37:34.022 "uuid": "0cb752ed-5972-40a6-a172-a109867a0cb3", 00:37:34.022 "is_configured": true, 00:37:34.022 "data_offset": 0, 00:37:34.022 "data_size": 65536 00:37:34.022 }, 00:37:34.022 { 00:37:34.022 "name": null, 00:37:34.022 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:34.022 "is_configured": false, 00:37:34.022 "data_offset": 0, 00:37:34.022 "data_size": 65536 00:37:34.022 }, 00:37:34.022 { 00:37:34.022 "name": null, 00:37:34.022 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:34.022 "is_configured": false, 00:37:34.022 "data_offset": 0, 00:37:34.022 "data_size": 65536 00:37:34.022 }, 00:37:34.022 { 00:37:34.022 "name": "BaseBdev4", 00:37:34.022 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:34.022 "is_configured": true, 00:37:34.022 "data_offset": 0, 00:37:34.022 "data_size": 65536 00:37:34.022 } 00:37:34.022 ] 00:37:34.022 }' 00:37:34.022 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:34.022 17:34:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.281 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.281 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.281 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.281 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:34.281 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.539 [2024-11-26 17:34:11.735699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:34.539 
17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:34.539 "name": "Existed_Raid", 00:37:34.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:34.539 "strip_size_kb": 64, 00:37:34.539 "state": "configuring", 00:37:34.539 "raid_level": "raid5f", 00:37:34.539 "superblock": false, 00:37:34.539 "num_base_bdevs": 4, 00:37:34.539 "num_base_bdevs_discovered": 3, 00:37:34.539 "num_base_bdevs_operational": 4, 00:37:34.539 "base_bdevs_list": [ 00:37:34.539 { 00:37:34.539 "name": "BaseBdev1", 00:37:34.539 "uuid": "0cb752ed-5972-40a6-a172-a109867a0cb3", 00:37:34.539 "is_configured": true, 00:37:34.539 "data_offset": 0, 00:37:34.539 "data_size": 65536 00:37:34.539 }, 00:37:34.539 { 00:37:34.539 "name": null, 00:37:34.539 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:34.539 "is_configured": 
false, 00:37:34.539 "data_offset": 0, 00:37:34.539 "data_size": 65536 00:37:34.539 }, 00:37:34.539 { 00:37:34.539 "name": "BaseBdev3", 00:37:34.539 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:34.539 "is_configured": true, 00:37:34.539 "data_offset": 0, 00:37:34.539 "data_size": 65536 00:37:34.539 }, 00:37:34.539 { 00:37:34.539 "name": "BaseBdev4", 00:37:34.539 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:34.539 "is_configured": true, 00:37:34.539 "data_offset": 0, 00:37:34.539 "data_size": 65536 00:37:34.539 } 00:37:34.539 ] 00:37:34.539 }' 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:34.539 17:34:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.798 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.798 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.798 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.798 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:34.798 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.798 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:37:34.798 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:34.798 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.798 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.798 [2024-11-26 17:34:12.243852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:35.057 "name": "Existed_Raid", 00:37:35.057 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:37:35.057 "strip_size_kb": 64, 00:37:35.057 "state": "configuring", 00:37:35.057 "raid_level": "raid5f", 00:37:35.057 "superblock": false, 00:37:35.057 "num_base_bdevs": 4, 00:37:35.057 "num_base_bdevs_discovered": 2, 00:37:35.057 "num_base_bdevs_operational": 4, 00:37:35.057 "base_bdevs_list": [ 00:37:35.057 { 00:37:35.057 "name": null, 00:37:35.057 "uuid": "0cb752ed-5972-40a6-a172-a109867a0cb3", 00:37:35.057 "is_configured": false, 00:37:35.057 "data_offset": 0, 00:37:35.057 "data_size": 65536 00:37:35.057 }, 00:37:35.057 { 00:37:35.057 "name": null, 00:37:35.057 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:35.057 "is_configured": false, 00:37:35.057 "data_offset": 0, 00:37:35.057 "data_size": 65536 00:37:35.057 }, 00:37:35.057 { 00:37:35.057 "name": "BaseBdev3", 00:37:35.057 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:35.057 "is_configured": true, 00:37:35.057 "data_offset": 0, 00:37:35.057 "data_size": 65536 00:37:35.057 }, 00:37:35.057 { 00:37:35.057 "name": "BaseBdev4", 00:37:35.057 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:35.057 "is_configured": true, 00:37:35.057 "data_offset": 0, 00:37:35.057 "data_size": 65536 00:37:35.057 } 00:37:35.057 ] 00:37:35.057 }' 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:35.057 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.624 [2024-11-26 17:34:12.842565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:35.624 "name": "Existed_Raid", 00:37:35.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:35.624 "strip_size_kb": 64, 00:37:35.624 "state": "configuring", 00:37:35.624 "raid_level": "raid5f", 00:37:35.624 "superblock": false, 00:37:35.624 "num_base_bdevs": 4, 00:37:35.624 "num_base_bdevs_discovered": 3, 00:37:35.624 "num_base_bdevs_operational": 4, 00:37:35.624 "base_bdevs_list": [ 00:37:35.624 { 00:37:35.624 "name": null, 00:37:35.624 "uuid": "0cb752ed-5972-40a6-a172-a109867a0cb3", 00:37:35.624 "is_configured": false, 00:37:35.624 "data_offset": 0, 00:37:35.624 "data_size": 65536 00:37:35.624 }, 00:37:35.624 { 00:37:35.624 "name": "BaseBdev2", 00:37:35.624 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:35.624 "is_configured": true, 00:37:35.624 "data_offset": 0, 00:37:35.624 "data_size": 65536 00:37:35.624 }, 00:37:35.624 { 00:37:35.624 "name": "BaseBdev3", 00:37:35.624 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:35.624 "is_configured": true, 00:37:35.624 "data_offset": 0, 00:37:35.624 "data_size": 65536 00:37:35.624 }, 00:37:35.624 { 00:37:35.624 "name": "BaseBdev4", 00:37:35.624 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:35.624 "is_configured": true, 00:37:35.624 "data_offset": 0, 00:37:35.624 "data_size": 65536 00:37:35.624 } 00:37:35.624 ] 00:37:35.624 }' 00:37:35.624 17:34:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:35.624 17:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.882 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:35.882 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.882 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.883 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.883 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0cb752ed-5972-40a6-a172-a109867a0cb3 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.141 [2024-11-26 17:34:13.419993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:36.141 [2024-11-26 
17:34:13.420076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:36.141 [2024-11-26 17:34:13.420088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:37:36.141 [2024-11-26 17:34:13.420389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:37:36.141 [2024-11-26 17:34:13.427554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:36.141 [2024-11-26 17:34:13.427583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:37:36.141 [2024-11-26 17:34:13.427869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:36.141 NewBaseBdev 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.141 17:34:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.142 [ 00:37:36.142 { 00:37:36.142 "name": "NewBaseBdev", 00:37:36.142 "aliases": [ 00:37:36.142 "0cb752ed-5972-40a6-a172-a109867a0cb3" 00:37:36.142 ], 00:37:36.142 "product_name": "Malloc disk", 00:37:36.142 "block_size": 512, 00:37:36.142 "num_blocks": 65536, 00:37:36.142 "uuid": "0cb752ed-5972-40a6-a172-a109867a0cb3", 00:37:36.142 "assigned_rate_limits": { 00:37:36.142 "rw_ios_per_sec": 0, 00:37:36.142 "rw_mbytes_per_sec": 0, 00:37:36.142 "r_mbytes_per_sec": 0, 00:37:36.142 "w_mbytes_per_sec": 0 00:37:36.142 }, 00:37:36.142 "claimed": true, 00:37:36.142 "claim_type": "exclusive_write", 00:37:36.142 "zoned": false, 00:37:36.142 "supported_io_types": { 00:37:36.142 "read": true, 00:37:36.142 "write": true, 00:37:36.142 "unmap": true, 00:37:36.142 "flush": true, 00:37:36.142 "reset": true, 00:37:36.142 "nvme_admin": false, 00:37:36.142 "nvme_io": false, 00:37:36.142 "nvme_io_md": false, 00:37:36.142 "write_zeroes": true, 00:37:36.142 "zcopy": true, 00:37:36.142 "get_zone_info": false, 00:37:36.142 "zone_management": false, 00:37:36.142 "zone_append": false, 00:37:36.142 "compare": false, 00:37:36.142 "compare_and_write": false, 00:37:36.142 "abort": true, 00:37:36.142 "seek_hole": false, 00:37:36.142 "seek_data": false, 00:37:36.142 "copy": true, 00:37:36.142 "nvme_iov_md": false 00:37:36.142 }, 00:37:36.142 "memory_domains": [ 00:37:36.142 { 00:37:36.142 "dma_device_id": "system", 00:37:36.142 "dma_device_type": 1 00:37:36.142 }, 00:37:36.142 { 00:37:36.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:36.142 "dma_device_type": 2 00:37:36.142 } 
00:37:36.142 ], 00:37:36.142 "driver_specific": {} 00:37:36.142 } 00:37:36.142 ] 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:36.142 "name": "Existed_Raid", 00:37:36.142 "uuid": "a99f97c9-fdec-46c7-9738-39c40f0dc474", 00:37:36.142 "strip_size_kb": 64, 00:37:36.142 "state": "online", 00:37:36.142 "raid_level": "raid5f", 00:37:36.142 "superblock": false, 00:37:36.142 "num_base_bdevs": 4, 00:37:36.142 "num_base_bdevs_discovered": 4, 00:37:36.142 "num_base_bdevs_operational": 4, 00:37:36.142 "base_bdevs_list": [ 00:37:36.142 { 00:37:36.142 "name": "NewBaseBdev", 00:37:36.142 "uuid": "0cb752ed-5972-40a6-a172-a109867a0cb3", 00:37:36.142 "is_configured": true, 00:37:36.142 "data_offset": 0, 00:37:36.142 "data_size": 65536 00:37:36.142 }, 00:37:36.142 { 00:37:36.142 "name": "BaseBdev2", 00:37:36.142 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:36.142 "is_configured": true, 00:37:36.142 "data_offset": 0, 00:37:36.142 "data_size": 65536 00:37:36.142 }, 00:37:36.142 { 00:37:36.142 "name": "BaseBdev3", 00:37:36.142 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:36.142 "is_configured": true, 00:37:36.142 "data_offset": 0, 00:37:36.142 "data_size": 65536 00:37:36.142 }, 00:37:36.142 { 00:37:36.142 "name": "BaseBdev4", 00:37:36.142 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:36.142 "is_configured": true, 00:37:36.142 "data_offset": 0, 00:37:36.142 "data_size": 65536 00:37:36.142 } 00:37:36.142 ] 00:37:36.142 }' 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:36.142 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.710 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:37:36.710 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:36.710 17:34:13 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:36.710 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:36.710 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:36.710 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:36.710 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:36.711 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:36.711 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.711 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.711 [2024-11-26 17:34:13.900756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:36.711 17:34:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.711 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:36.711 "name": "Existed_Raid", 00:37:36.711 "aliases": [ 00:37:36.711 "a99f97c9-fdec-46c7-9738-39c40f0dc474" 00:37:36.711 ], 00:37:36.711 "product_name": "Raid Volume", 00:37:36.711 "block_size": 512, 00:37:36.711 "num_blocks": 196608, 00:37:36.711 "uuid": "a99f97c9-fdec-46c7-9738-39c40f0dc474", 00:37:36.711 "assigned_rate_limits": { 00:37:36.711 "rw_ios_per_sec": 0, 00:37:36.711 "rw_mbytes_per_sec": 0, 00:37:36.711 "r_mbytes_per_sec": 0, 00:37:36.711 "w_mbytes_per_sec": 0 00:37:36.711 }, 00:37:36.711 "claimed": false, 00:37:36.711 "zoned": false, 00:37:36.711 "supported_io_types": { 00:37:36.711 "read": true, 00:37:36.711 "write": true, 00:37:36.711 "unmap": false, 00:37:36.711 "flush": false, 00:37:36.711 "reset": true, 00:37:36.711 "nvme_admin": false, 00:37:36.711 "nvme_io": false, 00:37:36.711 "nvme_io_md": 
false, 00:37:36.711 "write_zeroes": true, 00:37:36.711 "zcopy": false, 00:37:36.711 "get_zone_info": false, 00:37:36.711 "zone_management": false, 00:37:36.711 "zone_append": false, 00:37:36.711 "compare": false, 00:37:36.711 "compare_and_write": false, 00:37:36.711 "abort": false, 00:37:36.711 "seek_hole": false, 00:37:36.711 "seek_data": false, 00:37:36.711 "copy": false, 00:37:36.711 "nvme_iov_md": false 00:37:36.711 }, 00:37:36.711 "driver_specific": { 00:37:36.711 "raid": { 00:37:36.711 "uuid": "a99f97c9-fdec-46c7-9738-39c40f0dc474", 00:37:36.711 "strip_size_kb": 64, 00:37:36.711 "state": "online", 00:37:36.711 "raid_level": "raid5f", 00:37:36.711 "superblock": false, 00:37:36.711 "num_base_bdevs": 4, 00:37:36.711 "num_base_bdevs_discovered": 4, 00:37:36.711 "num_base_bdevs_operational": 4, 00:37:36.711 "base_bdevs_list": [ 00:37:36.711 { 00:37:36.711 "name": "NewBaseBdev", 00:37:36.711 "uuid": "0cb752ed-5972-40a6-a172-a109867a0cb3", 00:37:36.711 "is_configured": true, 00:37:36.711 "data_offset": 0, 00:37:36.711 "data_size": 65536 00:37:36.711 }, 00:37:36.711 { 00:37:36.711 "name": "BaseBdev2", 00:37:36.711 "uuid": "35359d0b-43dd-4bdb-97ea-e8a4cc1d9966", 00:37:36.711 "is_configured": true, 00:37:36.711 "data_offset": 0, 00:37:36.711 "data_size": 65536 00:37:36.711 }, 00:37:36.711 { 00:37:36.711 "name": "BaseBdev3", 00:37:36.711 "uuid": "e47d9a36-7ea6-4e7b-a0fe-9e9170b818d5", 00:37:36.711 "is_configured": true, 00:37:36.711 "data_offset": 0, 00:37:36.711 "data_size": 65536 00:37:36.711 }, 00:37:36.711 { 00:37:36.711 "name": "BaseBdev4", 00:37:36.711 "uuid": "4d337c57-bf21-4312-b425-ac2ee750ac11", 00:37:36.711 "is_configured": true, 00:37:36.711 "data_offset": 0, 00:37:36.711 "data_size": 65536 00:37:36.711 } 00:37:36.711 ] 00:37:36.711 } 00:37:36.711 } 00:37:36.711 }' 00:37:36.711 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:36.711 17:34:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:37:36.711 BaseBdev2 00:37:36.711 BaseBdev3 00:37:36.711 BaseBdev4' 00:37:36.711 17:34:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:36.711 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:36.994 17:34:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.994 [2024-11-26 17:34:14.212568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:36.994 [2024-11-26 17:34:14.212615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:36.994 [2024-11-26 17:34:14.212685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:36.994 [2024-11-26 17:34:14.212977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:36.994 [2024-11-26 17:34:14.212997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83243 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83243 ']' 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83243 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83243 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:36.994 killing process with pid 83243 00:37:36.994 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:36.995 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83243' 00:37:36.995 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83243 00:37:36.995 [2024-11-26 17:34:14.258702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:36.995 17:34:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83243 00:37:37.284 [2024-11-26 17:34:14.675137] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:38.661 17:34:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:37:38.661 00:37:38.661 real 0m11.725s 00:37:38.661 user 0m18.706s 00:37:38.661 sys 0m2.229s 00:37:38.661 17:34:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:38.661 17:34:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:38.661 ************************************ 00:37:38.661 END TEST raid5f_state_function_test 00:37:38.661 ************************************ 00:37:38.661 17:34:15 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:37:38.661 17:34:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:38.661 17:34:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.661 17:34:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:38.661 ************************************ 00:37:38.661 START TEST 
raid5f_state_function_test_sb 00:37:38.661 ************************************ 00:37:38.661 17:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:37:38.661 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:37:38.661 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:37:38.661 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:37:38.661 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:37:38.661 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:37:38.662 
17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83913 00:37:38.662 Process raid pid: 83913 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83913' 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83913 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # 
'[' -z 83913 ']' 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:37:38.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:38.662 17:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:38.662 [2024-11-26 17:34:16.015523] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:37:38.662 [2024-11-26 17:34:16.015693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:38.921 [2024-11-26 17:34:16.214688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.921 [2024-11-26 17:34:16.330877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.179 [2024-11-26 17:34:16.534013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:39.179 [2024-11-26 17:34:16.534059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:39.746 [2024-11-26 17:34:16.939141] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:39.746 [2024-11-26 17:34:16.939190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:39.746 [2024-11-26 17:34:16.939202] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:39.746 [2024-11-26 17:34:16.939215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:39.746 [2024-11-26 17:34:16.939223] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:37:39.746 [2024-11-26 17:34:16.939235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:39.746 [2024-11-26 17:34:16.939242] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:39.746 [2024-11-26 17:34:16.939254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:39.746 "name": "Existed_Raid", 00:37:39.746 "uuid": "ba137eb4-8846-42ce-9914-1d4edc995b4f", 00:37:39.746 "strip_size_kb": 64, 00:37:39.746 "state": "configuring", 00:37:39.746 "raid_level": "raid5f", 00:37:39.746 "superblock": true, 00:37:39.746 "num_base_bdevs": 4, 00:37:39.746 "num_base_bdevs_discovered": 0, 00:37:39.746 "num_base_bdevs_operational": 4, 00:37:39.746 "base_bdevs_list": [ 00:37:39.746 { 00:37:39.746 "name": "BaseBdev1", 00:37:39.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:39.746 "is_configured": false, 00:37:39.746 "data_offset": 0, 00:37:39.746 "data_size": 0 00:37:39.746 }, 00:37:39.746 { 00:37:39.746 "name": "BaseBdev2", 00:37:39.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:39.746 "is_configured": false, 00:37:39.746 "data_offset": 0, 00:37:39.746 "data_size": 0 00:37:39.746 }, 00:37:39.746 { 00:37:39.746 "name": "BaseBdev3", 00:37:39.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:39.746 "is_configured": false, 00:37:39.746 "data_offset": 0, 00:37:39.746 "data_size": 0 00:37:39.746 }, 00:37:39.746 { 00:37:39.746 "name": "BaseBdev4", 00:37:39.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:39.746 "is_configured": false, 00:37:39.746 "data_offset": 0, 00:37:39.746 "data_size": 0 00:37:39.746 } 00:37:39.746 ] 00:37:39.746 }' 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:39.746 17:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:37:40.004 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:40.004 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.004 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.004 [2024-11-26 17:34:17.375309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:40.004 [2024-11-26 17:34:17.375387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:37:40.004 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.004 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:40.004 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.004 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.004 [2024-11-26 17:34:17.387283] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:40.005 [2024-11-26 17:34:17.387343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:40.005 [2024-11-26 17:34:17.387359] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:40.005 [2024-11-26 17:34:17.387380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:40.005 [2024-11-26 17:34:17.387393] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:40.005 [2024-11-26 17:34:17.387413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:40.005 [2024-11-26 17:34:17.387425] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:40.005 [2024-11-26 17:34:17.387445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:40.005 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.005 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:40.005 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.005 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.263 [2024-11-26 17:34:17.467180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:40.263 BaseBdev1 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.263 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.263 [ 00:37:40.263 { 00:37:40.263 "name": "BaseBdev1", 00:37:40.263 "aliases": [ 00:37:40.263 "bf99e958-4e19-4d68-87dc-3324dedcfb27" 00:37:40.263 ], 00:37:40.263 "product_name": "Malloc disk", 00:37:40.263 "block_size": 512, 00:37:40.263 "num_blocks": 65536, 00:37:40.263 "uuid": "bf99e958-4e19-4d68-87dc-3324dedcfb27", 00:37:40.263 "assigned_rate_limits": { 00:37:40.263 "rw_ios_per_sec": 0, 00:37:40.263 "rw_mbytes_per_sec": 0, 00:37:40.263 "r_mbytes_per_sec": 0, 00:37:40.263 "w_mbytes_per_sec": 0 00:37:40.263 }, 00:37:40.263 "claimed": true, 00:37:40.263 "claim_type": "exclusive_write", 00:37:40.263 "zoned": false, 00:37:40.263 "supported_io_types": { 00:37:40.263 "read": true, 00:37:40.263 "write": true, 00:37:40.263 "unmap": true, 00:37:40.263 "flush": true, 00:37:40.263 "reset": true, 00:37:40.263 "nvme_admin": false, 00:37:40.263 "nvme_io": false, 00:37:40.263 "nvme_io_md": false, 00:37:40.263 "write_zeroes": true, 00:37:40.263 "zcopy": true, 00:37:40.263 "get_zone_info": false, 00:37:40.263 "zone_management": false, 00:37:40.263 "zone_append": false, 00:37:40.263 "compare": false, 00:37:40.263 "compare_and_write": false, 00:37:40.263 "abort": true, 00:37:40.263 "seek_hole": false, 00:37:40.263 "seek_data": false, 00:37:40.263 "copy": true, 00:37:40.263 "nvme_iov_md": false 00:37:40.263 }, 00:37:40.263 "memory_domains": [ 00:37:40.263 { 00:37:40.263 "dma_device_id": "system", 00:37:40.263 "dma_device_type": 1 00:37:40.263 }, 00:37:40.263 { 00:37:40.263 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:37:40.263 "dma_device_type": 2 00:37:40.263 } 00:37:40.264 ], 00:37:40.264 "driver_specific": {} 00:37:40.264 } 00:37:40.264 ] 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:40.264 17:34:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:40.264 "name": "Existed_Raid", 00:37:40.264 "uuid": "6ffa5190-766b-4511-81be-dd7e2cae86cf", 00:37:40.264 "strip_size_kb": 64, 00:37:40.264 "state": "configuring", 00:37:40.264 "raid_level": "raid5f", 00:37:40.264 "superblock": true, 00:37:40.264 "num_base_bdevs": 4, 00:37:40.264 "num_base_bdevs_discovered": 1, 00:37:40.264 "num_base_bdevs_operational": 4, 00:37:40.264 "base_bdevs_list": [ 00:37:40.264 { 00:37:40.264 "name": "BaseBdev1", 00:37:40.264 "uuid": "bf99e958-4e19-4d68-87dc-3324dedcfb27", 00:37:40.264 "is_configured": true, 00:37:40.264 "data_offset": 2048, 00:37:40.264 "data_size": 63488 00:37:40.264 }, 00:37:40.264 { 00:37:40.264 "name": "BaseBdev2", 00:37:40.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.264 "is_configured": false, 00:37:40.264 "data_offset": 0, 00:37:40.264 "data_size": 0 00:37:40.264 }, 00:37:40.264 { 00:37:40.264 "name": "BaseBdev3", 00:37:40.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.264 "is_configured": false, 00:37:40.264 "data_offset": 0, 00:37:40.264 "data_size": 0 00:37:40.264 }, 00:37:40.264 { 00:37:40.264 "name": "BaseBdev4", 00:37:40.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.264 "is_configured": false, 00:37:40.264 "data_offset": 0, 00:37:40.264 "data_size": 0 00:37:40.264 } 00:37:40.264 ] 00:37:40.264 }' 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:40.264 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:40.523 17:34:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.523 [2024-11-26 17:34:17.955278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:40.523 [2024-11-26 17:34:17.955325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.523 [2024-11-26 17:34:17.963358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:40.523 [2024-11-26 17:34:17.965769] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:40.523 [2024-11-26 17:34:17.965812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:40.523 [2024-11-26 17:34:17.965824] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:40.523 [2024-11-26 17:34:17.965840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:40.523 [2024-11-26 17:34:17.965848] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:40.523 [2024-11-26 17:34:17.965860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:40.523 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:40.782 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:40.782 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:40.782 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:40.782 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:40.782 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:40.782 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.782 17:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.782 17:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:40.782 17:34:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.782 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:40.782 "name": "Existed_Raid", 00:37:40.782 "uuid": "0f53900f-cdcb-4907-8320-4b9dabdb4508", 00:37:40.782 "strip_size_kb": 64, 00:37:40.782 "state": "configuring", 00:37:40.782 "raid_level": "raid5f", 00:37:40.782 "superblock": true, 00:37:40.782 "num_base_bdevs": 4, 00:37:40.782 "num_base_bdevs_discovered": 1, 00:37:40.782 "num_base_bdevs_operational": 4, 00:37:40.782 "base_bdevs_list": [ 00:37:40.782 { 00:37:40.782 "name": "BaseBdev1", 00:37:40.782 "uuid": "bf99e958-4e19-4d68-87dc-3324dedcfb27", 00:37:40.782 "is_configured": true, 00:37:40.782 "data_offset": 2048, 00:37:40.782 "data_size": 63488 00:37:40.782 }, 00:37:40.782 { 00:37:40.782 "name": "BaseBdev2", 00:37:40.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.782 "is_configured": false, 00:37:40.782 "data_offset": 0, 00:37:40.782 "data_size": 0 00:37:40.782 }, 00:37:40.782 { 00:37:40.782 "name": "BaseBdev3", 00:37:40.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.782 "is_configured": false, 00:37:40.782 "data_offset": 0, 00:37:40.782 "data_size": 0 00:37:40.782 }, 00:37:40.782 { 00:37:40.782 "name": "BaseBdev4", 00:37:40.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.782 "is_configured": false, 00:37:40.782 "data_offset": 0, 00:37:40.782 "data_size": 0 00:37:40.782 } 00:37:40.782 ] 00:37:40.782 }' 00:37:40.782 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:40.782 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.042 [2024-11-26 17:34:18.476258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:41.042 BaseBdev2 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.042 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.301 [ 00:37:41.301 { 00:37:41.301 "name": "BaseBdev2", 00:37:41.301 "aliases": [ 00:37:41.301 
"66237645-98fc-49e7-9c9e-0259e850021a" 00:37:41.301 ], 00:37:41.301 "product_name": "Malloc disk", 00:37:41.301 "block_size": 512, 00:37:41.301 "num_blocks": 65536, 00:37:41.301 "uuid": "66237645-98fc-49e7-9c9e-0259e850021a", 00:37:41.301 "assigned_rate_limits": { 00:37:41.301 "rw_ios_per_sec": 0, 00:37:41.301 "rw_mbytes_per_sec": 0, 00:37:41.301 "r_mbytes_per_sec": 0, 00:37:41.301 "w_mbytes_per_sec": 0 00:37:41.301 }, 00:37:41.301 "claimed": true, 00:37:41.301 "claim_type": "exclusive_write", 00:37:41.301 "zoned": false, 00:37:41.301 "supported_io_types": { 00:37:41.301 "read": true, 00:37:41.301 "write": true, 00:37:41.301 "unmap": true, 00:37:41.301 "flush": true, 00:37:41.301 "reset": true, 00:37:41.301 "nvme_admin": false, 00:37:41.301 "nvme_io": false, 00:37:41.301 "nvme_io_md": false, 00:37:41.301 "write_zeroes": true, 00:37:41.301 "zcopy": true, 00:37:41.301 "get_zone_info": false, 00:37:41.301 "zone_management": false, 00:37:41.301 "zone_append": false, 00:37:41.301 "compare": false, 00:37:41.301 "compare_and_write": false, 00:37:41.301 "abort": true, 00:37:41.301 "seek_hole": false, 00:37:41.301 "seek_data": false, 00:37:41.301 "copy": true, 00:37:41.301 "nvme_iov_md": false 00:37:41.301 }, 00:37:41.301 "memory_domains": [ 00:37:41.301 { 00:37:41.301 "dma_device_id": "system", 00:37:41.301 "dma_device_type": 1 00:37:41.301 }, 00:37:41.301 { 00:37:41.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:41.301 "dma_device_type": 2 00:37:41.301 } 00:37:41.301 ], 00:37:41.301 "driver_specific": {} 00:37:41.301 } 00:37:41.301 ] 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:41.301 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:41.302 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:41.302 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.302 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.302 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.302 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:41.302 "name": "Existed_Raid", 00:37:41.302 "uuid": 
"0f53900f-cdcb-4907-8320-4b9dabdb4508", 00:37:41.302 "strip_size_kb": 64, 00:37:41.302 "state": "configuring", 00:37:41.302 "raid_level": "raid5f", 00:37:41.302 "superblock": true, 00:37:41.302 "num_base_bdevs": 4, 00:37:41.302 "num_base_bdevs_discovered": 2, 00:37:41.302 "num_base_bdevs_operational": 4, 00:37:41.302 "base_bdevs_list": [ 00:37:41.302 { 00:37:41.302 "name": "BaseBdev1", 00:37:41.302 "uuid": "bf99e958-4e19-4d68-87dc-3324dedcfb27", 00:37:41.302 "is_configured": true, 00:37:41.302 "data_offset": 2048, 00:37:41.302 "data_size": 63488 00:37:41.302 }, 00:37:41.302 { 00:37:41.302 "name": "BaseBdev2", 00:37:41.302 "uuid": "66237645-98fc-49e7-9c9e-0259e850021a", 00:37:41.302 "is_configured": true, 00:37:41.302 "data_offset": 2048, 00:37:41.302 "data_size": 63488 00:37:41.302 }, 00:37:41.302 { 00:37:41.302 "name": "BaseBdev3", 00:37:41.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:41.302 "is_configured": false, 00:37:41.302 "data_offset": 0, 00:37:41.302 "data_size": 0 00:37:41.302 }, 00:37:41.302 { 00:37:41.302 "name": "BaseBdev4", 00:37:41.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:41.302 "is_configured": false, 00:37:41.302 "data_offset": 0, 00:37:41.302 "data_size": 0 00:37:41.302 } 00:37:41.302 ] 00:37:41.302 }' 00:37:41.302 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:41.302 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.561 17:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:41.561 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.561 17:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.819 [2024-11-26 17:34:19.008502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:41.819 BaseBdev3 
00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.819 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.819 [ 00:37:41.819 { 00:37:41.819 "name": "BaseBdev3", 00:37:41.819 "aliases": [ 00:37:41.819 "009a8ef8-2ec2-4aff-9991-db0f61e61de4" 00:37:41.819 ], 00:37:41.819 "product_name": "Malloc disk", 00:37:41.819 "block_size": 512, 00:37:41.819 "num_blocks": 65536, 00:37:41.819 "uuid": "009a8ef8-2ec2-4aff-9991-db0f61e61de4", 00:37:41.819 
"assigned_rate_limits": { 00:37:41.820 "rw_ios_per_sec": 0, 00:37:41.820 "rw_mbytes_per_sec": 0, 00:37:41.820 "r_mbytes_per_sec": 0, 00:37:41.820 "w_mbytes_per_sec": 0 00:37:41.820 }, 00:37:41.820 "claimed": true, 00:37:41.820 "claim_type": "exclusive_write", 00:37:41.820 "zoned": false, 00:37:41.820 "supported_io_types": { 00:37:41.820 "read": true, 00:37:41.820 "write": true, 00:37:41.820 "unmap": true, 00:37:41.820 "flush": true, 00:37:41.820 "reset": true, 00:37:41.820 "nvme_admin": false, 00:37:41.820 "nvme_io": false, 00:37:41.820 "nvme_io_md": false, 00:37:41.820 "write_zeroes": true, 00:37:41.820 "zcopy": true, 00:37:41.820 "get_zone_info": false, 00:37:41.820 "zone_management": false, 00:37:41.820 "zone_append": false, 00:37:41.820 "compare": false, 00:37:41.820 "compare_and_write": false, 00:37:41.820 "abort": true, 00:37:41.820 "seek_hole": false, 00:37:41.820 "seek_data": false, 00:37:41.820 "copy": true, 00:37:41.820 "nvme_iov_md": false 00:37:41.820 }, 00:37:41.820 "memory_domains": [ 00:37:41.820 { 00:37:41.820 "dma_device_id": "system", 00:37:41.820 "dma_device_type": 1 00:37:41.820 }, 00:37:41.820 { 00:37:41.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:41.820 "dma_device_type": 2 00:37:41.820 } 00:37:41.820 ], 00:37:41.820 "driver_specific": {} 00:37:41.820 } 00:37:41.820 ] 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:41.820 "name": "Existed_Raid", 00:37:41.820 "uuid": "0f53900f-cdcb-4907-8320-4b9dabdb4508", 00:37:41.820 "strip_size_kb": 64, 00:37:41.820 "state": "configuring", 00:37:41.820 "raid_level": "raid5f", 00:37:41.820 "superblock": true, 00:37:41.820 "num_base_bdevs": 4, 00:37:41.820 "num_base_bdevs_discovered": 3, 
00:37:41.820 "num_base_bdevs_operational": 4, 00:37:41.820 "base_bdevs_list": [ 00:37:41.820 { 00:37:41.820 "name": "BaseBdev1", 00:37:41.820 "uuid": "bf99e958-4e19-4d68-87dc-3324dedcfb27", 00:37:41.820 "is_configured": true, 00:37:41.820 "data_offset": 2048, 00:37:41.820 "data_size": 63488 00:37:41.820 }, 00:37:41.820 { 00:37:41.820 "name": "BaseBdev2", 00:37:41.820 "uuid": "66237645-98fc-49e7-9c9e-0259e850021a", 00:37:41.820 "is_configured": true, 00:37:41.820 "data_offset": 2048, 00:37:41.820 "data_size": 63488 00:37:41.820 }, 00:37:41.820 { 00:37:41.820 "name": "BaseBdev3", 00:37:41.820 "uuid": "009a8ef8-2ec2-4aff-9991-db0f61e61de4", 00:37:41.820 "is_configured": true, 00:37:41.820 "data_offset": 2048, 00:37:41.820 "data_size": 63488 00:37:41.820 }, 00:37:41.820 { 00:37:41.820 "name": "BaseBdev4", 00:37:41.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:41.820 "is_configured": false, 00:37:41.820 "data_offset": 0, 00:37:41.820 "data_size": 0 00:37:41.820 } 00:37:41.820 ] 00:37:41.820 }' 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:41.820 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.079 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:37:42.079 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.079 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.337 [2024-11-26 17:34:19.539695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:42.337 [2024-11-26 17:34:19.540072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:42.337 [2024-11-26 17:34:19.540090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:42.337 [2024-11-26 
17:34:19.540427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:42.337 BaseBdev4 00:37:42.337 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.337 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:37:42.337 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:37:42.337 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.338 [2024-11-26 17:34:19.548882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:42.338 [2024-11-26 17:34:19.548912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:37:42.338 [2024-11-26 17:34:19.549235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:42.338 17:34:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.338 [ 00:37:42.338 { 00:37:42.338 "name": "BaseBdev4", 00:37:42.338 "aliases": [ 00:37:42.338 "d9bb51e0-bc3e-44f6-85c6-9fe74f86e26a" 00:37:42.338 ], 00:37:42.338 "product_name": "Malloc disk", 00:37:42.338 "block_size": 512, 00:37:42.338 "num_blocks": 65536, 00:37:42.338 "uuid": "d9bb51e0-bc3e-44f6-85c6-9fe74f86e26a", 00:37:42.338 "assigned_rate_limits": { 00:37:42.338 "rw_ios_per_sec": 0, 00:37:42.338 "rw_mbytes_per_sec": 0, 00:37:42.338 "r_mbytes_per_sec": 0, 00:37:42.338 "w_mbytes_per_sec": 0 00:37:42.338 }, 00:37:42.338 "claimed": true, 00:37:42.338 "claim_type": "exclusive_write", 00:37:42.338 "zoned": false, 00:37:42.338 "supported_io_types": { 00:37:42.338 "read": true, 00:37:42.338 "write": true, 00:37:42.338 "unmap": true, 00:37:42.338 "flush": true, 00:37:42.338 "reset": true, 00:37:42.338 "nvme_admin": false, 00:37:42.338 "nvme_io": false, 00:37:42.338 "nvme_io_md": false, 00:37:42.338 "write_zeroes": true, 00:37:42.338 "zcopy": true, 00:37:42.338 "get_zone_info": false, 00:37:42.338 "zone_management": false, 00:37:42.338 "zone_append": false, 00:37:42.338 "compare": false, 00:37:42.338 "compare_and_write": false, 00:37:42.338 "abort": true, 00:37:42.338 "seek_hole": false, 00:37:42.338 "seek_data": false, 00:37:42.338 "copy": true, 00:37:42.338 "nvme_iov_md": false 00:37:42.338 }, 00:37:42.338 "memory_domains": [ 00:37:42.338 { 00:37:42.338 "dma_device_id": "system", 00:37:42.338 "dma_device_type": 1 00:37:42.338 }, 00:37:42.338 { 00:37:42.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:42.338 "dma_device_type": 2 00:37:42.338 } 00:37:42.338 ], 00:37:42.338 "driver_specific": {} 00:37:42.338 } 00:37:42.338 ] 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.338 17:34:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:42.338 "name": "Existed_Raid", 00:37:42.338 "uuid": "0f53900f-cdcb-4907-8320-4b9dabdb4508", 00:37:42.338 "strip_size_kb": 64, 00:37:42.338 "state": "online", 00:37:42.338 "raid_level": "raid5f", 00:37:42.338 "superblock": true, 00:37:42.338 "num_base_bdevs": 4, 00:37:42.338 "num_base_bdevs_discovered": 4, 00:37:42.338 "num_base_bdevs_operational": 4, 00:37:42.338 "base_bdevs_list": [ 00:37:42.338 { 00:37:42.338 "name": "BaseBdev1", 00:37:42.338 "uuid": "bf99e958-4e19-4d68-87dc-3324dedcfb27", 00:37:42.338 "is_configured": true, 00:37:42.338 "data_offset": 2048, 00:37:42.338 "data_size": 63488 00:37:42.338 }, 00:37:42.338 { 00:37:42.338 "name": "BaseBdev2", 00:37:42.338 "uuid": "66237645-98fc-49e7-9c9e-0259e850021a", 00:37:42.338 "is_configured": true, 00:37:42.338 "data_offset": 2048, 00:37:42.338 "data_size": 63488 00:37:42.338 }, 00:37:42.338 { 00:37:42.338 "name": "BaseBdev3", 00:37:42.338 "uuid": "009a8ef8-2ec2-4aff-9991-db0f61e61de4", 00:37:42.338 "is_configured": true, 00:37:42.338 "data_offset": 2048, 00:37:42.338 "data_size": 63488 00:37:42.338 }, 00:37:42.338 { 00:37:42.338 "name": "BaseBdev4", 00:37:42.338 "uuid": "d9bb51e0-bc3e-44f6-85c6-9fe74f86e26a", 00:37:42.338 "is_configured": true, 00:37:42.338 "data_offset": 2048, 00:37:42.338 "data_size": 63488 00:37:42.338 } 00:37:42.338 ] 00:37:42.338 }' 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:42.338 17:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.597 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:42.597 [2024-11-26 17:34:20.035688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:42.856 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.856 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:42.856 "name": "Existed_Raid", 00:37:42.856 "aliases": [ 00:37:42.856 "0f53900f-cdcb-4907-8320-4b9dabdb4508" 00:37:42.856 ], 00:37:42.856 "product_name": "Raid Volume", 00:37:42.856 "block_size": 512, 00:37:42.856 "num_blocks": 190464, 00:37:42.856 "uuid": "0f53900f-cdcb-4907-8320-4b9dabdb4508", 00:37:42.856 "assigned_rate_limits": { 00:37:42.856 "rw_ios_per_sec": 0, 00:37:42.856 "rw_mbytes_per_sec": 0, 00:37:42.856 "r_mbytes_per_sec": 0, 00:37:42.856 "w_mbytes_per_sec": 0 00:37:42.856 }, 00:37:42.856 "claimed": false, 00:37:42.856 "zoned": false, 00:37:42.856 "supported_io_types": { 00:37:42.856 "read": true, 00:37:42.856 "write": true, 00:37:42.856 "unmap": false, 00:37:42.856 "flush": false, 
00:37:42.856 "reset": true, 00:37:42.856 "nvme_admin": false, 00:37:42.856 "nvme_io": false, 00:37:42.856 "nvme_io_md": false, 00:37:42.856 "write_zeroes": true, 00:37:42.856 "zcopy": false, 00:37:42.856 "get_zone_info": false, 00:37:42.856 "zone_management": false, 00:37:42.856 "zone_append": false, 00:37:42.856 "compare": false, 00:37:42.856 "compare_and_write": false, 00:37:42.856 "abort": false, 00:37:42.856 "seek_hole": false, 00:37:42.856 "seek_data": false, 00:37:42.856 "copy": false, 00:37:42.856 "nvme_iov_md": false 00:37:42.856 }, 00:37:42.856 "driver_specific": { 00:37:42.856 "raid": { 00:37:42.856 "uuid": "0f53900f-cdcb-4907-8320-4b9dabdb4508", 00:37:42.856 "strip_size_kb": 64, 00:37:42.856 "state": "online", 00:37:42.856 "raid_level": "raid5f", 00:37:42.856 "superblock": true, 00:37:42.856 "num_base_bdevs": 4, 00:37:42.856 "num_base_bdevs_discovered": 4, 00:37:42.856 "num_base_bdevs_operational": 4, 00:37:42.856 "base_bdevs_list": [ 00:37:42.856 { 00:37:42.856 "name": "BaseBdev1", 00:37:42.856 "uuid": "bf99e958-4e19-4d68-87dc-3324dedcfb27", 00:37:42.856 "is_configured": true, 00:37:42.856 "data_offset": 2048, 00:37:42.856 "data_size": 63488 00:37:42.856 }, 00:37:42.856 { 00:37:42.856 "name": "BaseBdev2", 00:37:42.856 "uuid": "66237645-98fc-49e7-9c9e-0259e850021a", 00:37:42.856 "is_configured": true, 00:37:42.856 "data_offset": 2048, 00:37:42.856 "data_size": 63488 00:37:42.856 }, 00:37:42.856 { 00:37:42.856 "name": "BaseBdev3", 00:37:42.856 "uuid": "009a8ef8-2ec2-4aff-9991-db0f61e61de4", 00:37:42.856 "is_configured": true, 00:37:42.856 "data_offset": 2048, 00:37:42.856 "data_size": 63488 00:37:42.856 }, 00:37:42.856 { 00:37:42.856 "name": "BaseBdev4", 00:37:42.856 "uuid": "d9bb51e0-bc3e-44f6-85c6-9fe74f86e26a", 00:37:42.856 "is_configured": true, 00:37:42.856 "data_offset": 2048, 00:37:42.856 "data_size": 63488 00:37:42.856 } 00:37:42.856 ] 00:37:42.856 } 00:37:42.856 } 00:37:42.856 }' 00:37:42.856 17:34:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:42.856 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:37:42.856 BaseBdev2 00:37:42.856 BaseBdev3 00:37:42.856 BaseBdev4' 00:37:42.856 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:42.856 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:42.857 17:34:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.857 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.153 [2024-11-26 17:34:20.347538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:43.153 17:34:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:43.154 "name": "Existed_Raid", 00:37:43.154 "uuid": "0f53900f-cdcb-4907-8320-4b9dabdb4508", 00:37:43.154 "strip_size_kb": 64, 00:37:43.154 "state": "online", 00:37:43.154 "raid_level": "raid5f", 00:37:43.154 "superblock": true, 00:37:43.154 "num_base_bdevs": 4, 00:37:43.154 "num_base_bdevs_discovered": 3, 00:37:43.154 "num_base_bdevs_operational": 3, 00:37:43.154 "base_bdevs_list": [ 00:37:43.154 { 00:37:43.154 "name": 
null, 00:37:43.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.154 "is_configured": false, 00:37:43.154 "data_offset": 0, 00:37:43.154 "data_size": 63488 00:37:43.154 }, 00:37:43.154 { 00:37:43.154 "name": "BaseBdev2", 00:37:43.154 "uuid": "66237645-98fc-49e7-9c9e-0259e850021a", 00:37:43.154 "is_configured": true, 00:37:43.154 "data_offset": 2048, 00:37:43.154 "data_size": 63488 00:37:43.154 }, 00:37:43.154 { 00:37:43.154 "name": "BaseBdev3", 00:37:43.154 "uuid": "009a8ef8-2ec2-4aff-9991-db0f61e61de4", 00:37:43.154 "is_configured": true, 00:37:43.154 "data_offset": 2048, 00:37:43.154 "data_size": 63488 00:37:43.154 }, 00:37:43.154 { 00:37:43.154 "name": "BaseBdev4", 00:37:43.154 "uuid": "d9bb51e0-bc3e-44f6-85c6-9fe74f86e26a", 00:37:43.154 "is_configured": true, 00:37:43.154 "data_offset": 2048, 00:37:43.154 "data_size": 63488 00:37:43.154 } 00:37:43.154 ] 00:37:43.154 }' 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:43.154 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.721 17:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.721 [2024-11-26 17:34:20.957476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:43.721 [2024-11-26 17:34:20.957708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:43.721 [2024-11-26 17:34:21.065966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.721 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.722 [2024-11-26 17:34:21.122056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.979 [2024-11-26 
17:34:21.282871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:37:43.979 [2024-11-26 17:34:21.282946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.979 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.238 17:34:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.238 BaseBdev2 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.238 [ 00:37:44.238 { 00:37:44.238 "name": "BaseBdev2", 00:37:44.238 "aliases": [ 00:37:44.238 "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c" 00:37:44.238 ], 00:37:44.238 "product_name": "Malloc disk", 00:37:44.238 "block_size": 512, 00:37:44.238 
"num_blocks": 65536, 00:37:44.238 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:44.238 "assigned_rate_limits": { 00:37:44.238 "rw_ios_per_sec": 0, 00:37:44.238 "rw_mbytes_per_sec": 0, 00:37:44.238 "r_mbytes_per_sec": 0, 00:37:44.238 "w_mbytes_per_sec": 0 00:37:44.238 }, 00:37:44.238 "claimed": false, 00:37:44.238 "zoned": false, 00:37:44.238 "supported_io_types": { 00:37:44.238 "read": true, 00:37:44.238 "write": true, 00:37:44.238 "unmap": true, 00:37:44.238 "flush": true, 00:37:44.238 "reset": true, 00:37:44.238 "nvme_admin": false, 00:37:44.238 "nvme_io": false, 00:37:44.238 "nvme_io_md": false, 00:37:44.238 "write_zeroes": true, 00:37:44.238 "zcopy": true, 00:37:44.238 "get_zone_info": false, 00:37:44.238 "zone_management": false, 00:37:44.238 "zone_append": false, 00:37:44.238 "compare": false, 00:37:44.238 "compare_and_write": false, 00:37:44.238 "abort": true, 00:37:44.238 "seek_hole": false, 00:37:44.238 "seek_data": false, 00:37:44.238 "copy": true, 00:37:44.238 "nvme_iov_md": false 00:37:44.238 }, 00:37:44.238 "memory_domains": [ 00:37:44.238 { 00:37:44.238 "dma_device_id": "system", 00:37:44.238 "dma_device_type": 1 00:37:44.238 }, 00:37:44.238 { 00:37:44.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:44.238 "dma_device_type": 2 00:37:44.238 } 00:37:44.238 ], 00:37:44.238 "driver_specific": {} 00:37:44.238 } 00:37:44.238 ] 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:44.238 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:44.239 17:34:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.239 BaseBdev3 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.239 [ 00:37:44.239 { 00:37:44.239 "name": "BaseBdev3", 00:37:44.239 "aliases": [ 00:37:44.239 
"d6aa1e11-92be-482a-b968-c06eb1ada827" 00:37:44.239 ], 00:37:44.239 "product_name": "Malloc disk", 00:37:44.239 "block_size": 512, 00:37:44.239 "num_blocks": 65536, 00:37:44.239 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 00:37:44.239 "assigned_rate_limits": { 00:37:44.239 "rw_ios_per_sec": 0, 00:37:44.239 "rw_mbytes_per_sec": 0, 00:37:44.239 "r_mbytes_per_sec": 0, 00:37:44.239 "w_mbytes_per_sec": 0 00:37:44.239 }, 00:37:44.239 "claimed": false, 00:37:44.239 "zoned": false, 00:37:44.239 "supported_io_types": { 00:37:44.239 "read": true, 00:37:44.239 "write": true, 00:37:44.239 "unmap": true, 00:37:44.239 "flush": true, 00:37:44.239 "reset": true, 00:37:44.239 "nvme_admin": false, 00:37:44.239 "nvme_io": false, 00:37:44.239 "nvme_io_md": false, 00:37:44.239 "write_zeroes": true, 00:37:44.239 "zcopy": true, 00:37:44.239 "get_zone_info": false, 00:37:44.239 "zone_management": false, 00:37:44.239 "zone_append": false, 00:37:44.239 "compare": false, 00:37:44.239 "compare_and_write": false, 00:37:44.239 "abort": true, 00:37:44.239 "seek_hole": false, 00:37:44.239 "seek_data": false, 00:37:44.239 "copy": true, 00:37:44.239 "nvme_iov_md": false 00:37:44.239 }, 00:37:44.239 "memory_domains": [ 00:37:44.239 { 00:37:44.239 "dma_device_id": "system", 00:37:44.239 "dma_device_type": 1 00:37:44.239 }, 00:37:44.239 { 00:37:44.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:44.239 "dma_device_type": 2 00:37:44.239 } 00:37:44.239 ], 00:37:44.239 "driver_specific": {} 00:37:44.239 } 00:37:44.239 ] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:44.239 17:34:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.239 BaseBdev4 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:37:44.239 [ 00:37:44.239 { 00:37:44.239 "name": "BaseBdev4", 00:37:44.239 "aliases": [ 00:37:44.239 "c968d570-2af6-464d-b577-1773f0e1db40" 00:37:44.239 ], 00:37:44.239 "product_name": "Malloc disk", 00:37:44.239 "block_size": 512, 00:37:44.239 "num_blocks": 65536, 00:37:44.239 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:44.239 "assigned_rate_limits": { 00:37:44.239 "rw_ios_per_sec": 0, 00:37:44.239 "rw_mbytes_per_sec": 0, 00:37:44.239 "r_mbytes_per_sec": 0, 00:37:44.239 "w_mbytes_per_sec": 0 00:37:44.239 }, 00:37:44.239 "claimed": false, 00:37:44.239 "zoned": false, 00:37:44.239 "supported_io_types": { 00:37:44.239 "read": true, 00:37:44.239 "write": true, 00:37:44.239 "unmap": true, 00:37:44.239 "flush": true, 00:37:44.239 "reset": true, 00:37:44.239 "nvme_admin": false, 00:37:44.239 "nvme_io": false, 00:37:44.239 "nvme_io_md": false, 00:37:44.239 "write_zeroes": true, 00:37:44.239 "zcopy": true, 00:37:44.239 "get_zone_info": false, 00:37:44.239 "zone_management": false, 00:37:44.239 "zone_append": false, 00:37:44.239 "compare": false, 00:37:44.239 "compare_and_write": false, 00:37:44.239 "abort": true, 00:37:44.239 "seek_hole": false, 00:37:44.239 "seek_data": false, 00:37:44.239 "copy": true, 00:37:44.239 "nvme_iov_md": false 00:37:44.239 }, 00:37:44.239 "memory_domains": [ 00:37:44.239 { 00:37:44.239 "dma_device_id": "system", 00:37:44.239 "dma_device_type": 1 00:37:44.239 }, 00:37:44.239 { 00:37:44.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:44.239 "dma_device_type": 2 00:37:44.239 } 00:37:44.239 ], 00:37:44.239 "driver_specific": {} 00:37:44.239 } 00:37:44.239 ] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:44.239 17:34:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.239 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.497 [2024-11-26 17:34:21.684951] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:44.498 [2024-11-26 17:34:21.685023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:44.498 [2024-11-26 17:34:21.685080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:44.498 [2024-11-26 17:34:21.687741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:44.498 [2024-11-26 17:34:21.687816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:44.498 "name": "Existed_Raid", 00:37:44.498 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:44.498 "strip_size_kb": 64, 00:37:44.498 "state": "configuring", 00:37:44.498 "raid_level": "raid5f", 00:37:44.498 "superblock": true, 00:37:44.498 "num_base_bdevs": 4, 00:37:44.498 "num_base_bdevs_discovered": 3, 00:37:44.498 "num_base_bdevs_operational": 4, 00:37:44.498 "base_bdevs_list": [ 00:37:44.498 { 00:37:44.498 "name": "BaseBdev1", 00:37:44.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:44.498 "is_configured": false, 00:37:44.498 "data_offset": 0, 00:37:44.498 "data_size": 0 00:37:44.498 }, 00:37:44.498 { 00:37:44.498 "name": "BaseBdev2", 00:37:44.498 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:44.498 "is_configured": true, 00:37:44.498 "data_offset": 2048, 00:37:44.498 
"data_size": 63488 00:37:44.498 }, 00:37:44.498 { 00:37:44.498 "name": "BaseBdev3", 00:37:44.498 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 00:37:44.498 "is_configured": true, 00:37:44.498 "data_offset": 2048, 00:37:44.498 "data_size": 63488 00:37:44.498 }, 00:37:44.498 { 00:37:44.498 "name": "BaseBdev4", 00:37:44.498 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:44.498 "is_configured": true, 00:37:44.498 "data_offset": 2048, 00:37:44.498 "data_size": 63488 00:37:44.498 } 00:37:44.498 ] 00:37:44.498 }' 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:44.498 17:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.756 [2024-11-26 17:34:22.153064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:44.756 17:34:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.756 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.015 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:45.015 "name": "Existed_Raid", 00:37:45.015 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:45.015 "strip_size_kb": 64, 00:37:45.015 "state": "configuring", 00:37:45.015 "raid_level": "raid5f", 00:37:45.015 "superblock": true, 00:37:45.015 "num_base_bdevs": 4, 00:37:45.015 "num_base_bdevs_discovered": 2, 00:37:45.015 "num_base_bdevs_operational": 4, 00:37:45.015 "base_bdevs_list": [ 00:37:45.015 { 00:37:45.015 "name": "BaseBdev1", 00:37:45.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:45.015 "is_configured": false, 00:37:45.015 "data_offset": 0, 00:37:45.015 "data_size": 0 00:37:45.015 }, 00:37:45.015 { 00:37:45.015 "name": null, 00:37:45.015 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:45.015 
"is_configured": false, 00:37:45.015 "data_offset": 0, 00:37:45.015 "data_size": 63488 00:37:45.015 }, 00:37:45.015 { 00:37:45.015 "name": "BaseBdev3", 00:37:45.015 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 00:37:45.015 "is_configured": true, 00:37:45.015 "data_offset": 2048, 00:37:45.015 "data_size": 63488 00:37:45.015 }, 00:37:45.015 { 00:37:45.015 "name": "BaseBdev4", 00:37:45.015 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:45.015 "is_configured": true, 00:37:45.015 "data_offset": 2048, 00:37:45.015 "data_size": 63488 00:37:45.015 } 00:37:45.015 ] 00:37:45.015 }' 00:37:45.015 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:45.015 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.273 [2024-11-26 17:34:22.710447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:37:45.273 BaseBdev1 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.273 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.531 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.531 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:45.531 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.531 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.531 [ 00:37:45.531 { 00:37:45.531 "name": "BaseBdev1", 00:37:45.531 "aliases": [ 00:37:45.531 "c9487b7c-4870-4bd7-a808-d29bde6715dd" 00:37:45.531 ], 00:37:45.531 "product_name": "Malloc disk", 00:37:45.531 "block_size": 512, 00:37:45.531 "num_blocks": 65536, 00:37:45.531 "uuid": "c9487b7c-4870-4bd7-a808-d29bde6715dd", 
00:37:45.531 "assigned_rate_limits": { 00:37:45.531 "rw_ios_per_sec": 0, 00:37:45.531 "rw_mbytes_per_sec": 0, 00:37:45.531 "r_mbytes_per_sec": 0, 00:37:45.531 "w_mbytes_per_sec": 0 00:37:45.531 }, 00:37:45.531 "claimed": true, 00:37:45.531 "claim_type": "exclusive_write", 00:37:45.531 "zoned": false, 00:37:45.531 "supported_io_types": { 00:37:45.531 "read": true, 00:37:45.531 "write": true, 00:37:45.531 "unmap": true, 00:37:45.531 "flush": true, 00:37:45.531 "reset": true, 00:37:45.531 "nvme_admin": false, 00:37:45.531 "nvme_io": false, 00:37:45.531 "nvme_io_md": false, 00:37:45.531 "write_zeroes": true, 00:37:45.531 "zcopy": true, 00:37:45.531 "get_zone_info": false, 00:37:45.531 "zone_management": false, 00:37:45.531 "zone_append": false, 00:37:45.531 "compare": false, 00:37:45.531 "compare_and_write": false, 00:37:45.532 "abort": true, 00:37:45.532 "seek_hole": false, 00:37:45.532 "seek_data": false, 00:37:45.532 "copy": true, 00:37:45.532 "nvme_iov_md": false 00:37:45.532 }, 00:37:45.532 "memory_domains": [ 00:37:45.532 { 00:37:45.532 "dma_device_id": "system", 00:37:45.532 "dma_device_type": 1 00:37:45.532 }, 00:37:45.532 { 00:37:45.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:45.532 "dma_device_type": 2 00:37:45.532 } 00:37:45.532 ], 00:37:45.532 "driver_specific": {} 00:37:45.532 } 00:37:45.532 ] 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:45.532 17:34:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:45.532 "name": "Existed_Raid", 00:37:45.532 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:45.532 "strip_size_kb": 64, 00:37:45.532 "state": "configuring", 00:37:45.532 "raid_level": "raid5f", 00:37:45.532 "superblock": true, 00:37:45.532 "num_base_bdevs": 4, 00:37:45.532 "num_base_bdevs_discovered": 3, 00:37:45.532 "num_base_bdevs_operational": 4, 00:37:45.532 "base_bdevs_list": [ 00:37:45.532 { 00:37:45.532 "name": "BaseBdev1", 00:37:45.532 "uuid": "c9487b7c-4870-4bd7-a808-d29bde6715dd", 
00:37:45.532 "is_configured": true, 00:37:45.532 "data_offset": 2048, 00:37:45.532 "data_size": 63488 00:37:45.532 }, 00:37:45.532 { 00:37:45.532 "name": null, 00:37:45.532 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:45.532 "is_configured": false, 00:37:45.532 "data_offset": 0, 00:37:45.532 "data_size": 63488 00:37:45.532 }, 00:37:45.532 { 00:37:45.532 "name": "BaseBdev3", 00:37:45.532 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 00:37:45.532 "is_configured": true, 00:37:45.532 "data_offset": 2048, 00:37:45.532 "data_size": 63488 00:37:45.532 }, 00:37:45.532 { 00:37:45.532 "name": "BaseBdev4", 00:37:45.532 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:45.532 "is_configured": true, 00:37:45.532 "data_offset": 2048, 00:37:45.532 "data_size": 63488 00:37:45.532 } 00:37:45.532 ] 00:37:45.532 }' 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:45.532 17:34:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.790 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.790 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.790 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.790 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:45.790 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.048 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.049 [2024-11-26 17:34:23.250687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:46.049 "name": "Existed_Raid", 00:37:46.049 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:46.049 "strip_size_kb": 64, 00:37:46.049 "state": "configuring", 00:37:46.049 "raid_level": "raid5f", 00:37:46.049 "superblock": true, 00:37:46.049 "num_base_bdevs": 4, 00:37:46.049 "num_base_bdevs_discovered": 2, 00:37:46.049 "num_base_bdevs_operational": 4, 00:37:46.049 "base_bdevs_list": [ 00:37:46.049 { 00:37:46.049 "name": "BaseBdev1", 00:37:46.049 "uuid": "c9487b7c-4870-4bd7-a808-d29bde6715dd", 00:37:46.049 "is_configured": true, 00:37:46.049 "data_offset": 2048, 00:37:46.049 "data_size": 63488 00:37:46.049 }, 00:37:46.049 { 00:37:46.049 "name": null, 00:37:46.049 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:46.049 "is_configured": false, 00:37:46.049 "data_offset": 0, 00:37:46.049 "data_size": 63488 00:37:46.049 }, 00:37:46.049 { 00:37:46.049 "name": null, 00:37:46.049 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 00:37:46.049 "is_configured": false, 00:37:46.049 "data_offset": 0, 00:37:46.049 "data_size": 63488 00:37:46.049 }, 00:37:46.049 { 00:37:46.049 "name": "BaseBdev4", 00:37:46.049 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:46.049 "is_configured": true, 00:37:46.049 "data_offset": 2048, 00:37:46.049 "data_size": 63488 00:37:46.049 } 00:37:46.049 ] 00:37:46.049 }' 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:46.049 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.308 [2024-11-26 17:34:23.746754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:46.308 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:46.568 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:46.568 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:46.568 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.568 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.568 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.568 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:46.568 "name": "Existed_Raid", 00:37:46.568 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:46.568 "strip_size_kb": 64, 00:37:46.568 "state": "configuring", 00:37:46.568 "raid_level": "raid5f", 00:37:46.568 "superblock": true, 00:37:46.568 "num_base_bdevs": 4, 00:37:46.568 "num_base_bdevs_discovered": 3, 00:37:46.568 "num_base_bdevs_operational": 4, 00:37:46.568 "base_bdevs_list": [ 00:37:46.568 { 00:37:46.568 "name": "BaseBdev1", 00:37:46.568 "uuid": "c9487b7c-4870-4bd7-a808-d29bde6715dd", 00:37:46.568 "is_configured": true, 00:37:46.568 "data_offset": 2048, 00:37:46.568 "data_size": 63488 00:37:46.568 }, 00:37:46.568 { 00:37:46.568 "name": null, 00:37:46.568 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:46.568 "is_configured": false, 00:37:46.568 "data_offset": 0, 00:37:46.568 "data_size": 63488 00:37:46.568 }, 00:37:46.568 { 00:37:46.568 "name": "BaseBdev3", 00:37:46.568 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 
00:37:46.568 "is_configured": true, 00:37:46.568 "data_offset": 2048, 00:37:46.568 "data_size": 63488 00:37:46.568 }, 00:37:46.568 { 00:37:46.568 "name": "BaseBdev4", 00:37:46.568 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:46.568 "is_configured": true, 00:37:46.568 "data_offset": 2048, 00:37:46.568 "data_size": 63488 00:37:46.568 } 00:37:46.568 ] 00:37:46.568 }' 00:37:46.568 17:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:46.568 17:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.842 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:46.842 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.842 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.842 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:46.842 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.842 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:37:46.842 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:46.842 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.842 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.842 [2024-11-26 17:34:24.219354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:47.142 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.143 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:47.143 "name": "Existed_Raid", 00:37:47.143 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:47.143 "strip_size_kb": 64, 00:37:47.143 "state": "configuring", 00:37:47.143 "raid_level": "raid5f", 
00:37:47.143 "superblock": true, 00:37:47.143 "num_base_bdevs": 4, 00:37:47.143 "num_base_bdevs_discovered": 2, 00:37:47.143 "num_base_bdevs_operational": 4, 00:37:47.143 "base_bdevs_list": [ 00:37:47.143 { 00:37:47.143 "name": null, 00:37:47.143 "uuid": "c9487b7c-4870-4bd7-a808-d29bde6715dd", 00:37:47.143 "is_configured": false, 00:37:47.143 "data_offset": 0, 00:37:47.143 "data_size": 63488 00:37:47.143 }, 00:37:47.143 { 00:37:47.143 "name": null, 00:37:47.143 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:47.143 "is_configured": false, 00:37:47.143 "data_offset": 0, 00:37:47.143 "data_size": 63488 00:37:47.143 }, 00:37:47.143 { 00:37:47.143 "name": "BaseBdev3", 00:37:47.143 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 00:37:47.143 "is_configured": true, 00:37:47.143 "data_offset": 2048, 00:37:47.143 "data_size": 63488 00:37:47.143 }, 00:37:47.143 { 00:37:47.143 "name": "BaseBdev4", 00:37:47.143 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:47.143 "is_configured": true, 00:37:47.143 "data_offset": 2048, 00:37:47.143 "data_size": 63488 00:37:47.143 } 00:37:47.143 ] 00:37:47.143 }' 00:37:47.143 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:47.143 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.401 [2024-11-26 17:34:24.830434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:47.401 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.402 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.660 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.660 17:34:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:47.660 "name": "Existed_Raid", 00:37:47.660 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:47.660 "strip_size_kb": 64, 00:37:47.660 "state": "configuring", 00:37:47.660 "raid_level": "raid5f", 00:37:47.660 "superblock": true, 00:37:47.660 "num_base_bdevs": 4, 00:37:47.660 "num_base_bdevs_discovered": 3, 00:37:47.660 "num_base_bdevs_operational": 4, 00:37:47.660 "base_bdevs_list": [ 00:37:47.660 { 00:37:47.660 "name": null, 00:37:47.660 "uuid": "c9487b7c-4870-4bd7-a808-d29bde6715dd", 00:37:47.660 "is_configured": false, 00:37:47.660 "data_offset": 0, 00:37:47.660 "data_size": 63488 00:37:47.660 }, 00:37:47.660 { 00:37:47.660 "name": "BaseBdev2", 00:37:47.660 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:47.660 "is_configured": true, 00:37:47.660 "data_offset": 2048, 00:37:47.660 "data_size": 63488 00:37:47.660 }, 00:37:47.660 { 00:37:47.660 "name": "BaseBdev3", 00:37:47.660 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 00:37:47.660 "is_configured": true, 00:37:47.660 "data_offset": 2048, 00:37:47.660 "data_size": 63488 00:37:47.660 }, 00:37:47.660 { 00:37:47.660 "name": "BaseBdev4", 00:37:47.660 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:47.660 "is_configured": true, 00:37:47.660 "data_offset": 2048, 00:37:47.660 "data_size": 63488 00:37:47.660 } 00:37:47.660 ] 00:37:47.660 }' 00:37:47.660 17:34:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:47.660 17:34:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c9487b7c-4870-4bd7-a808-d29bde6715dd 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.918 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:48.177 [2024-11-26 17:34:25.407749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:48.177 [2024-11-26 
17:34:25.408011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:48.177 [2024-11-26 17:34:25.408030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:48.177 [2024-11-26 17:34:25.408356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:37:48.177 NewBaseBdev 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:48.177 [2024-11-26 17:34:25.416004] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:48.177 [2024-11-26 17:34:25.416037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:37:48.177 [2024-11-26 17:34:25.416312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:48.177 17:34:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:48.177 [ 00:37:48.177 { 00:37:48.177 "name": "NewBaseBdev", 00:37:48.177 "aliases": [ 00:37:48.177 "c9487b7c-4870-4bd7-a808-d29bde6715dd" 00:37:48.177 ], 00:37:48.177 "product_name": "Malloc disk", 00:37:48.177 "block_size": 512, 00:37:48.177 "num_blocks": 65536, 00:37:48.177 "uuid": "c9487b7c-4870-4bd7-a808-d29bde6715dd", 00:37:48.177 "assigned_rate_limits": { 00:37:48.177 "rw_ios_per_sec": 0, 00:37:48.177 "rw_mbytes_per_sec": 0, 00:37:48.177 "r_mbytes_per_sec": 0, 00:37:48.177 "w_mbytes_per_sec": 0 00:37:48.177 }, 00:37:48.177 "claimed": true, 00:37:48.177 "claim_type": "exclusive_write", 00:37:48.177 "zoned": false, 00:37:48.177 "supported_io_types": { 00:37:48.177 "read": true, 00:37:48.177 "write": true, 00:37:48.177 "unmap": true, 00:37:48.177 "flush": true, 00:37:48.177 "reset": true, 00:37:48.177 "nvme_admin": false, 00:37:48.177 "nvme_io": false, 00:37:48.177 "nvme_io_md": false, 00:37:48.177 "write_zeroes": true, 00:37:48.177 "zcopy": true, 00:37:48.177 "get_zone_info": false, 00:37:48.177 "zone_management": false, 00:37:48.177 "zone_append": false, 00:37:48.177 "compare": false, 00:37:48.177 "compare_and_write": false, 00:37:48.177 "abort": true, 00:37:48.177 "seek_hole": false, 00:37:48.177 "seek_data": false, 00:37:48.177 "copy": true, 00:37:48.177 "nvme_iov_md": false 00:37:48.177 }, 00:37:48.177 "memory_domains": [ 00:37:48.177 { 00:37:48.177 "dma_device_id": "system", 00:37:48.177 "dma_device_type": 1 00:37:48.177 }, 00:37:48.177 { 00:37:48.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:37:48.177 "dma_device_type": 2 00:37:48.177 } 00:37:48.177 ], 00:37:48.177 "driver_specific": {} 00:37:48.177 } 00:37:48.177 ] 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:48.177 "name": "Existed_Raid", 00:37:48.177 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:48.177 "strip_size_kb": 64, 00:37:48.177 "state": "online", 00:37:48.177 "raid_level": "raid5f", 00:37:48.177 "superblock": true, 00:37:48.177 "num_base_bdevs": 4, 00:37:48.177 "num_base_bdevs_discovered": 4, 00:37:48.177 "num_base_bdevs_operational": 4, 00:37:48.177 "base_bdevs_list": [ 00:37:48.177 { 00:37:48.177 "name": "NewBaseBdev", 00:37:48.177 "uuid": "c9487b7c-4870-4bd7-a808-d29bde6715dd", 00:37:48.177 "is_configured": true, 00:37:48.177 "data_offset": 2048, 00:37:48.177 "data_size": 63488 00:37:48.177 }, 00:37:48.177 { 00:37:48.177 "name": "BaseBdev2", 00:37:48.177 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:48.177 "is_configured": true, 00:37:48.177 "data_offset": 2048, 00:37:48.177 "data_size": 63488 00:37:48.177 }, 00:37:48.177 { 00:37:48.177 "name": "BaseBdev3", 00:37:48.177 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 00:37:48.177 "is_configured": true, 00:37:48.177 "data_offset": 2048, 00:37:48.177 "data_size": 63488 00:37:48.177 }, 00:37:48.177 { 00:37:48.177 "name": "BaseBdev4", 00:37:48.177 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:48.177 "is_configured": true, 00:37:48.177 "data_offset": 2048, 00:37:48.177 "data_size": 63488 00:37:48.177 } 00:37:48.177 ] 00:37:48.177 }' 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:48.177 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:37:48.744 17:34:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:48.744 [2024-11-26 17:34:25.926339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:48.744 "name": "Existed_Raid", 00:37:48.744 "aliases": [ 00:37:48.744 "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143" 00:37:48.744 ], 00:37:48.744 "product_name": "Raid Volume", 00:37:48.744 "block_size": 512, 00:37:48.744 "num_blocks": 190464, 00:37:48.744 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:48.744 "assigned_rate_limits": { 00:37:48.744 "rw_ios_per_sec": 0, 00:37:48.744 "rw_mbytes_per_sec": 0, 00:37:48.744 "r_mbytes_per_sec": 0, 00:37:48.744 "w_mbytes_per_sec": 0 00:37:48.744 }, 00:37:48.744 "claimed": false, 00:37:48.744 "zoned": false, 00:37:48.744 "supported_io_types": { 00:37:48.744 "read": true, 00:37:48.744 
"write": true, 00:37:48.744 "unmap": false, 00:37:48.744 "flush": false, 00:37:48.744 "reset": true, 00:37:48.744 "nvme_admin": false, 00:37:48.744 "nvme_io": false, 00:37:48.744 "nvme_io_md": false, 00:37:48.744 "write_zeroes": true, 00:37:48.744 "zcopy": false, 00:37:48.744 "get_zone_info": false, 00:37:48.744 "zone_management": false, 00:37:48.744 "zone_append": false, 00:37:48.744 "compare": false, 00:37:48.744 "compare_and_write": false, 00:37:48.744 "abort": false, 00:37:48.744 "seek_hole": false, 00:37:48.744 "seek_data": false, 00:37:48.744 "copy": false, 00:37:48.744 "nvme_iov_md": false 00:37:48.744 }, 00:37:48.744 "driver_specific": { 00:37:48.744 "raid": { 00:37:48.744 "uuid": "0d792dd8-5e1b-4bf1-bb85-ee1f5dc13143", 00:37:48.744 "strip_size_kb": 64, 00:37:48.744 "state": "online", 00:37:48.744 "raid_level": "raid5f", 00:37:48.744 "superblock": true, 00:37:48.744 "num_base_bdevs": 4, 00:37:48.744 "num_base_bdevs_discovered": 4, 00:37:48.744 "num_base_bdevs_operational": 4, 00:37:48.744 "base_bdevs_list": [ 00:37:48.744 { 00:37:48.744 "name": "NewBaseBdev", 00:37:48.744 "uuid": "c9487b7c-4870-4bd7-a808-d29bde6715dd", 00:37:48.744 "is_configured": true, 00:37:48.744 "data_offset": 2048, 00:37:48.744 "data_size": 63488 00:37:48.744 }, 00:37:48.744 { 00:37:48.744 "name": "BaseBdev2", 00:37:48.744 "uuid": "e86c3c7f-b3ee-4e44-abc3-aa31953fee2c", 00:37:48.744 "is_configured": true, 00:37:48.744 "data_offset": 2048, 00:37:48.744 "data_size": 63488 00:37:48.744 }, 00:37:48.744 { 00:37:48.744 "name": "BaseBdev3", 00:37:48.744 "uuid": "d6aa1e11-92be-482a-b968-c06eb1ada827", 00:37:48.744 "is_configured": true, 00:37:48.744 "data_offset": 2048, 00:37:48.744 "data_size": 63488 00:37:48.744 }, 00:37:48.744 { 00:37:48.744 "name": "BaseBdev4", 00:37:48.744 "uuid": "c968d570-2af6-464d-b577-1773f0e1db40", 00:37:48.744 "is_configured": true, 00:37:48.744 "data_offset": 2048, 00:37:48.744 "data_size": 63488 00:37:48.744 } 00:37:48.744 ] 00:37:48.744 } 00:37:48.744 } 
00:37:48.744 }' 00:37:48.744 17:34:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:37:48.744 BaseBdev2 00:37:48.744 BaseBdev3 00:37:48.744 BaseBdev4' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:48.744 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.003 17:34:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:49.003 [2024-11-26 17:34:26.238041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:49.003 [2024-11-26 17:34:26.238107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:49.003 [2024-11-26 17:34:26.238208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:49.003 [2024-11-26 17:34:26.238594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:49.003 [2024-11-26 17:34:26.238617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83913 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83913 ']' 00:37:49.003 17:34:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83913 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83913 00:37:49.003 killing process with pid 83913 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83913' 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83913 00:37:49.003 [2024-11-26 17:34:26.286559] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:49.003 17:34:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83913 00:37:49.568 [2024-11-26 17:34:26.738263] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:50.943 17:34:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:37:50.943 00:37:50.943 real 0m12.134s 00:37:50.943 user 0m19.130s 00:37:50.943 sys 0m2.302s 00:37:50.943 17:34:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:50.943 ************************************ 00:37:50.943 END TEST raid5f_state_function_test_sb 00:37:50.943 ************************************ 00:37:50.943 17:34:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:50.943 17:34:28 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:37:50.943 17:34:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:50.943 17:34:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:50.943 17:34:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:50.943 ************************************ 00:37:50.943 START TEST raid5f_superblock_test 00:37:50.943 ************************************ 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84584 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84584 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84584 ']' 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.943 17:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:50.944 17:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:50.944 17:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:50.944 17:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:50.944 [2024-11-26 17:34:28.206374] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:37:50.944 [2024-11-26 17:34:28.206563] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84584 ] 00:37:51.201 [2024-11-26 17:34:28.399149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.202 [2024-11-26 17:34:28.548210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:51.460 [2024-11-26 17:34:28.789416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:51.460 [2024-11-26 17:34:28.789499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.719 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.978 malloc1 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.978 [2024-11-26 17:34:29.175490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:51.978 [2024-11-26 17:34:29.175578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:51.978 [2024-11-26 17:34:29.175608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:51.978 [2024-11-26 17:34:29.175621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:51.978 [2024-11-26 17:34:29.178436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:51.978 [2024-11-26 17:34:29.178473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:51.978 pt1 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.978 malloc2 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.978 [2024-11-26 17:34:29.236930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:51.978 [2024-11-26 17:34:29.236995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:51.978 [2024-11-26 17:34:29.237032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:51.978 [2024-11-26 17:34:29.237058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:51.978 [2024-11-26 17:34:29.239871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:51.978 [2024-11-26 17:34:29.239909] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:51.978 pt2 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.978 malloc3 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.978 [2024-11-26 17:34:29.311033] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:51.978 [2024-11-26 17:34:29.311113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:51.978 [2024-11-26 17:34:29.311145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:51.978 [2024-11-26 17:34:29.311159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:51.978 [2024-11-26 17:34:29.313912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:51.978 [2024-11-26 17:34:29.313950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:51.978 pt3 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.978 17:34:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.978 malloc4 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.978 [2024-11-26 17:34:29.372428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:51.978 [2024-11-26 17:34:29.372500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:51.978 [2024-11-26 17:34:29.372529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:51.978 [2024-11-26 17:34:29.372542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:51.978 [2024-11-26 17:34:29.375456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:51.978 [2024-11-26 17:34:29.375495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:51.978 pt4 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.978 17:34:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:37:51.978 [2024-11-26 17:34:29.380462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:51.978 [2024-11-26 17:34:29.382906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:51.978 [2024-11-26 17:34:29.383004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:51.978 [2024-11-26 17:34:29.383070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:51.978 [2024-11-26 17:34:29.383280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:51.978 [2024-11-26 17:34:29.383299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:51.979 [2024-11-26 17:34:29.383592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:51.979 [2024-11-26 17:34:29.391293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:51.979 [2024-11-26 17:34:29.391321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:51.979 [2024-11-26 17:34:29.391519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:51.979 
17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.979 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.237 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:52.237 "name": "raid_bdev1", 00:37:52.237 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:52.237 "strip_size_kb": 64, 00:37:52.237 "state": "online", 00:37:52.237 "raid_level": "raid5f", 00:37:52.237 "superblock": true, 00:37:52.237 "num_base_bdevs": 4, 00:37:52.237 "num_base_bdevs_discovered": 4, 00:37:52.237 "num_base_bdevs_operational": 4, 00:37:52.237 "base_bdevs_list": [ 00:37:52.237 { 00:37:52.237 "name": "pt1", 00:37:52.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:52.237 "is_configured": true, 00:37:52.237 "data_offset": 2048, 00:37:52.237 "data_size": 63488 00:37:52.237 }, 00:37:52.237 { 00:37:52.237 "name": "pt2", 00:37:52.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:52.237 "is_configured": true, 00:37:52.237 "data_offset": 2048, 00:37:52.237 
"data_size": 63488 00:37:52.237 }, 00:37:52.237 { 00:37:52.237 "name": "pt3", 00:37:52.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:52.237 "is_configured": true, 00:37:52.237 "data_offset": 2048, 00:37:52.237 "data_size": 63488 00:37:52.237 }, 00:37:52.237 { 00:37:52.237 "name": "pt4", 00:37:52.237 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:52.237 "is_configured": true, 00:37:52.237 "data_offset": 2048, 00:37:52.237 "data_size": 63488 00:37:52.237 } 00:37:52.237 ] 00:37:52.237 }' 00:37:52.237 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:52.237 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:52.495 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:37:52.495 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:52.495 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:52.495 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:52.495 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:52.495 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:52.495 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:52.495 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.495 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:52.496 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:52.496 [2024-11-26 17:34:29.833559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:52.496 17:34:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.496 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:52.496 "name": "raid_bdev1", 00:37:52.496 "aliases": [ 00:37:52.496 "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16" 00:37:52.496 ], 00:37:52.496 "product_name": "Raid Volume", 00:37:52.496 "block_size": 512, 00:37:52.496 "num_blocks": 190464, 00:37:52.496 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:52.496 "assigned_rate_limits": { 00:37:52.496 "rw_ios_per_sec": 0, 00:37:52.496 "rw_mbytes_per_sec": 0, 00:37:52.496 "r_mbytes_per_sec": 0, 00:37:52.496 "w_mbytes_per_sec": 0 00:37:52.496 }, 00:37:52.496 "claimed": false, 00:37:52.496 "zoned": false, 00:37:52.496 "supported_io_types": { 00:37:52.496 "read": true, 00:37:52.496 "write": true, 00:37:52.496 "unmap": false, 00:37:52.496 "flush": false, 00:37:52.496 "reset": true, 00:37:52.496 "nvme_admin": false, 00:37:52.496 "nvme_io": false, 00:37:52.496 "nvme_io_md": false, 00:37:52.496 "write_zeroes": true, 00:37:52.496 "zcopy": false, 00:37:52.496 "get_zone_info": false, 00:37:52.496 "zone_management": false, 00:37:52.496 "zone_append": false, 00:37:52.496 "compare": false, 00:37:52.496 "compare_and_write": false, 00:37:52.496 "abort": false, 00:37:52.496 "seek_hole": false, 00:37:52.496 "seek_data": false, 00:37:52.496 "copy": false, 00:37:52.496 "nvme_iov_md": false 00:37:52.496 }, 00:37:52.496 "driver_specific": { 00:37:52.496 "raid": { 00:37:52.496 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:52.496 "strip_size_kb": 64, 00:37:52.496 "state": "online", 00:37:52.496 "raid_level": "raid5f", 00:37:52.496 "superblock": true, 00:37:52.496 "num_base_bdevs": 4, 00:37:52.496 "num_base_bdevs_discovered": 4, 00:37:52.496 "num_base_bdevs_operational": 4, 00:37:52.496 "base_bdevs_list": [ 00:37:52.496 { 00:37:52.496 "name": "pt1", 00:37:52.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:52.496 "is_configured": true, 00:37:52.496 "data_offset": 2048, 
00:37:52.496 "data_size": 63488 00:37:52.496 }, 00:37:52.496 { 00:37:52.496 "name": "pt2", 00:37:52.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:52.496 "is_configured": true, 00:37:52.496 "data_offset": 2048, 00:37:52.496 "data_size": 63488 00:37:52.496 }, 00:37:52.496 { 00:37:52.496 "name": "pt3", 00:37:52.496 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:52.496 "is_configured": true, 00:37:52.496 "data_offset": 2048, 00:37:52.496 "data_size": 63488 00:37:52.496 }, 00:37:52.496 { 00:37:52.496 "name": "pt4", 00:37:52.496 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:52.496 "is_configured": true, 00:37:52.496 "data_offset": 2048, 00:37:52.496 "data_size": 63488 00:37:52.496 } 00:37:52.496 ] 00:37:52.496 } 00:37:52.496 } 00:37:52.496 }' 00:37:52.496 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:52.496 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:52.496 pt2 00:37:52.496 pt3 00:37:52.496 pt4' 00:37:52.496 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:52.754 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:52.754 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:52.754 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:52.754 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:52.754 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.754 17:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:52.754 17:34:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.754 17:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.754 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:52.755 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:52.755 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:37:52.755 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:52.755 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.755 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:52.755 [2024-11-26 17:34:30.157492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:52.755 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7d7c1de8-3075-4e39-bcb2-86a0b06a0e16 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
7d7c1de8-3075-4e39-bcb2-86a0b06a0e16 ']' 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.013 [2024-11-26 17:34:30.225370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:53.013 [2024-11-26 17:34:30.225595] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:53.013 [2024-11-26 17:34:30.225774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:53.013 [2024-11-26 17:34:30.226026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:53.013 [2024-11-26 17:34:30.226069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:37:53.013 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:53.014 
17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.014 17:34:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.014 [2024-11-26 17:34:30.377399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:53.014 [2024-11-26 17:34:30.380130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:53.014 [2024-11-26 17:34:30.380181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:37:53.014 [2024-11-26 17:34:30.380217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:37:53.014 [2024-11-26 17:34:30.380275] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:53.014 [2024-11-26 17:34:30.380328] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:53.014 [2024-11-26 17:34:30.380351] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:37:53.014 [2024-11-26 17:34:30.380374] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:37:53.014 [2024-11-26 17:34:30.380390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:53.014 [2024-11-26 17:34:30.380404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:37:53.014 request: 00:37:53.014 { 00:37:53.014 "name": "raid_bdev1", 00:37:53.014 "raid_level": "raid5f", 00:37:53.014 "base_bdevs": [ 00:37:53.014 "malloc1", 00:37:53.014 "malloc2", 00:37:53.014 "malloc3", 00:37:53.014 "malloc4" 00:37:53.014 ], 00:37:53.014 "strip_size_kb": 64, 00:37:53.014 "superblock": false, 00:37:53.014 "method": "bdev_raid_create", 00:37:53.014 "req_id": 1 00:37:53.014 } 00:37:53.014 Got JSON-RPC error response 
00:37:53.014 response: 00:37:53.014 { 00:37:53.014 "code": -17, 00:37:53.014 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:53.014 } 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.014 [2024-11-26 17:34:30.433384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:53.014 [2024-11-26 17:34:30.433582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:37:53.014 [2024-11-26 17:34:30.433639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:53.014 [2024-11-26 17:34:30.433723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:53.014 [2024-11-26 17:34:30.437038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:53.014 [2024-11-26 17:34:30.437237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:53.014 [2024-11-26 17:34:30.437497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:53.014 [2024-11-26 17:34:30.437626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:53.014 pt1 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:53.014 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.273 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.273 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:53.273 "name": "raid_bdev1", 00:37:53.273 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:53.273 "strip_size_kb": 64, 00:37:53.273 "state": "configuring", 00:37:53.273 "raid_level": "raid5f", 00:37:53.273 "superblock": true, 00:37:53.273 "num_base_bdevs": 4, 00:37:53.273 "num_base_bdevs_discovered": 1, 00:37:53.273 "num_base_bdevs_operational": 4, 00:37:53.273 "base_bdevs_list": [ 00:37:53.273 { 00:37:53.273 "name": "pt1", 00:37:53.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:53.273 "is_configured": true, 00:37:53.273 "data_offset": 2048, 00:37:53.273 "data_size": 63488 00:37:53.273 }, 00:37:53.273 { 00:37:53.273 "name": null, 00:37:53.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:53.273 "is_configured": false, 00:37:53.273 "data_offset": 2048, 00:37:53.273 "data_size": 63488 00:37:53.273 }, 00:37:53.273 { 00:37:53.273 "name": null, 00:37:53.273 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:53.273 "is_configured": false, 00:37:53.273 "data_offset": 2048, 00:37:53.273 "data_size": 63488 00:37:53.273 }, 00:37:53.273 { 00:37:53.273 "name": null, 00:37:53.273 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:53.273 "is_configured": false, 00:37:53.273 "data_offset": 2048, 00:37:53.273 "data_size": 63488 00:37:53.273 } 00:37:53.273 ] 00:37:53.273 }' 
00:37:53.273 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:53.273 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.533 [2024-11-26 17:34:30.873733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:53.533 [2024-11-26 17:34:30.875287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:53.533 [2024-11-26 17:34:30.875335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:37:53.533 [2024-11-26 17:34:30.875359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:53.533 [2024-11-26 17:34:30.876041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:53.533 [2024-11-26 17:34:30.876120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:53.533 [2024-11-26 17:34:30.876247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:53.533 [2024-11-26 17:34:30.876286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:53.533 pt2 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.533 [2024-11-26 17:34:30.881719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:53.533 "name": "raid_bdev1", 00:37:53.533 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:53.533 "strip_size_kb": 64, 00:37:53.533 "state": "configuring", 00:37:53.533 "raid_level": "raid5f", 00:37:53.533 "superblock": true, 00:37:53.533 "num_base_bdevs": 4, 00:37:53.533 "num_base_bdevs_discovered": 1, 00:37:53.533 "num_base_bdevs_operational": 4, 00:37:53.533 "base_bdevs_list": [ 00:37:53.533 { 00:37:53.533 "name": "pt1", 00:37:53.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:53.533 "is_configured": true, 00:37:53.533 "data_offset": 2048, 00:37:53.533 "data_size": 63488 00:37:53.533 }, 00:37:53.533 { 00:37:53.533 "name": null, 00:37:53.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:53.533 "is_configured": false, 00:37:53.533 "data_offset": 0, 00:37:53.533 "data_size": 63488 00:37:53.533 }, 00:37:53.533 { 00:37:53.533 "name": null, 00:37:53.533 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:53.533 "is_configured": false, 00:37:53.533 "data_offset": 2048, 00:37:53.533 "data_size": 63488 00:37:53.533 }, 00:37:53.533 { 00:37:53.533 "name": null, 00:37:53.533 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:53.533 "is_configured": false, 00:37:53.533 "data_offset": 2048, 00:37:53.533 "data_size": 63488 00:37:53.533 } 00:37:53.533 ] 00:37:53.533 }' 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:53.533 17:34:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.102 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:37:54.102 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.103 [2024-11-26 17:34:31.329768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:54.103 [2024-11-26 17:34:31.329974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:54.103 [2024-11-26 17:34:31.330036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:37:54.103 [2024-11-26 17:34:31.330163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:54.103 [2024-11-26 17:34:31.330742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:54.103 [2024-11-26 17:34:31.330769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:54.103 [2024-11-26 17:34:31.330864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:54.103 [2024-11-26 17:34:31.330889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:54.103 pt2 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.103 [2024-11-26 17:34:31.337742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:37:54.103 [2024-11-26 17:34:31.337909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:54.103 [2024-11-26 17:34:31.337971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:54.103 [2024-11-26 17:34:31.338113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:54.103 [2024-11-26 17:34:31.338600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:54.103 [2024-11-26 17:34:31.338744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:54.103 [2024-11-26 17:34:31.338911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:54.103 [2024-11-26 17:34:31.338961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:54.103 pt3 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.103 [2024-11-26 17:34:31.345719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:54.103 [2024-11-26 17:34:31.345853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:54.103 [2024-11-26 17:34:31.345904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:37:54.103 [2024-11-26 17:34:31.346030] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:54.103 [2024-11-26 17:34:31.346472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:54.103 [2024-11-26 17:34:31.346605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:54.103 [2024-11-26 17:34:31.346746] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:54.103 [2024-11-26 17:34:31.346842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:54.103 [2024-11-26 17:34:31.347096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:54.103 [2024-11-26 17:34:31.347195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:54.103 [2024-11-26 17:34:31.347486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:54.103 [2024-11-26 17:34:31.354500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:54.103 [2024-11-26 17:34:31.354620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:37:54.103 [2024-11-26 17:34:31.354880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:54.103 pt4 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:54.103 "name": "raid_bdev1", 00:37:54.103 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:54.103 "strip_size_kb": 64, 00:37:54.103 "state": "online", 00:37:54.103 "raid_level": "raid5f", 00:37:54.103 "superblock": true, 00:37:54.103 "num_base_bdevs": 4, 00:37:54.103 "num_base_bdevs_discovered": 4, 00:37:54.103 "num_base_bdevs_operational": 4, 00:37:54.103 "base_bdevs_list": [ 00:37:54.103 { 00:37:54.103 "name": "pt1", 00:37:54.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:54.103 "is_configured": true, 00:37:54.103 
"data_offset": 2048, 00:37:54.103 "data_size": 63488 00:37:54.103 }, 00:37:54.103 { 00:37:54.103 "name": "pt2", 00:37:54.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:54.103 "is_configured": true, 00:37:54.103 "data_offset": 2048, 00:37:54.103 "data_size": 63488 00:37:54.103 }, 00:37:54.103 { 00:37:54.103 "name": "pt3", 00:37:54.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:54.103 "is_configured": true, 00:37:54.103 "data_offset": 2048, 00:37:54.103 "data_size": 63488 00:37:54.103 }, 00:37:54.103 { 00:37:54.103 "name": "pt4", 00:37:54.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:54.103 "is_configured": true, 00:37:54.103 "data_offset": 2048, 00:37:54.103 "data_size": 63488 00:37:54.103 } 00:37:54.103 ] 00:37:54.103 }' 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:54.103 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.361 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:37:54.361 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:54.361 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:54.361 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:54.361 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:54.361 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:54.361 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:54.361 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.361 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.361 17:34:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:54.361 [2024-11-26 17:34:31.791799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:54.619 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.619 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:54.619 "name": "raid_bdev1", 00:37:54.619 "aliases": [ 00:37:54.619 "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16" 00:37:54.619 ], 00:37:54.619 "product_name": "Raid Volume", 00:37:54.619 "block_size": 512, 00:37:54.619 "num_blocks": 190464, 00:37:54.619 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:54.619 "assigned_rate_limits": { 00:37:54.619 "rw_ios_per_sec": 0, 00:37:54.619 "rw_mbytes_per_sec": 0, 00:37:54.619 "r_mbytes_per_sec": 0, 00:37:54.619 "w_mbytes_per_sec": 0 00:37:54.619 }, 00:37:54.619 "claimed": false, 00:37:54.619 "zoned": false, 00:37:54.619 "supported_io_types": { 00:37:54.619 "read": true, 00:37:54.619 "write": true, 00:37:54.619 "unmap": false, 00:37:54.619 "flush": false, 00:37:54.619 "reset": true, 00:37:54.619 "nvme_admin": false, 00:37:54.619 "nvme_io": false, 00:37:54.619 "nvme_io_md": false, 00:37:54.619 "write_zeroes": true, 00:37:54.619 "zcopy": false, 00:37:54.619 "get_zone_info": false, 00:37:54.619 "zone_management": false, 00:37:54.619 "zone_append": false, 00:37:54.619 "compare": false, 00:37:54.619 "compare_and_write": false, 00:37:54.619 "abort": false, 00:37:54.619 "seek_hole": false, 00:37:54.619 "seek_data": false, 00:37:54.619 "copy": false, 00:37:54.619 "nvme_iov_md": false 00:37:54.619 }, 00:37:54.619 "driver_specific": { 00:37:54.619 "raid": { 00:37:54.619 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:54.619 "strip_size_kb": 64, 00:37:54.619 "state": "online", 00:37:54.619 "raid_level": "raid5f", 00:37:54.619 "superblock": true, 00:37:54.619 "num_base_bdevs": 4, 00:37:54.619 "num_base_bdevs_discovered": 4, 
00:37:54.619 "num_base_bdevs_operational": 4, 00:37:54.619 "base_bdevs_list": [ 00:37:54.619 { 00:37:54.619 "name": "pt1", 00:37:54.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:54.619 "is_configured": true, 00:37:54.619 "data_offset": 2048, 00:37:54.619 "data_size": 63488 00:37:54.619 }, 00:37:54.619 { 00:37:54.619 "name": "pt2", 00:37:54.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:54.619 "is_configured": true, 00:37:54.619 "data_offset": 2048, 00:37:54.619 "data_size": 63488 00:37:54.619 }, 00:37:54.619 { 00:37:54.619 "name": "pt3", 00:37:54.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:54.619 "is_configured": true, 00:37:54.619 "data_offset": 2048, 00:37:54.619 "data_size": 63488 00:37:54.620 }, 00:37:54.620 { 00:37:54.620 "name": "pt4", 00:37:54.620 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:54.620 "is_configured": true, 00:37:54.620 "data_offset": 2048, 00:37:54.620 "data_size": 63488 00:37:54.620 } 00:37:54.620 ] 00:37:54.620 } 00:37:54.620 } 00:37:54.620 }' 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:54.620 pt2 00:37:54.620 pt3 00:37:54.620 pt4' 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.620 17:34:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.620 17:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.620 
17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.620 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:37:54.879 [2024-11-26 17:34:32.103807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7d7c1de8-3075-4e39-bcb2-86a0b06a0e16 '!=' 7d7c1de8-3075-4e39-bcb2-86a0b06a0e16 ']' 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.879 [2024-11-26 17:34:32.143686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:54.879 "name": "raid_bdev1", 00:37:54.879 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:54.879 "strip_size_kb": 64, 00:37:54.879 "state": "online", 00:37:54.879 "raid_level": "raid5f", 00:37:54.879 "superblock": true, 00:37:54.879 "num_base_bdevs": 4, 00:37:54.879 "num_base_bdevs_discovered": 3, 00:37:54.879 "num_base_bdevs_operational": 3, 00:37:54.879 "base_bdevs_list": [ 00:37:54.879 { 00:37:54.879 "name": null, 00:37:54.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:54.879 "is_configured": false, 00:37:54.879 "data_offset": 0, 00:37:54.879 "data_size": 63488 00:37:54.879 }, 00:37:54.879 { 00:37:54.879 "name": "pt2", 00:37:54.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:54.879 "is_configured": true, 00:37:54.879 "data_offset": 2048, 00:37:54.879 "data_size": 63488 00:37:54.879 }, 00:37:54.879 { 00:37:54.879 "name": "pt3", 00:37:54.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:54.879 "is_configured": true, 00:37:54.879 "data_offset": 2048, 00:37:54.879 "data_size": 63488 00:37:54.879 }, 00:37:54.879 { 00:37:54.879 "name": "pt4", 00:37:54.879 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:54.879 "is_configured": true, 00:37:54.879 
"data_offset": 2048, 00:37:54.879 "data_size": 63488 00:37:54.879 } 00:37:54.879 ] 00:37:54.879 }' 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:54.879 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.447 [2024-11-26 17:34:32.591776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:55.447 [2024-11-26 17:34:32.591817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:55.447 [2024-11-26 17:34:32.591901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:55.447 [2024-11-26 17:34:32.591983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:55.447 [2024-11-26 17:34:32.591996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.447 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.447 [2024-11-26 17:34:32.679793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:55.447 [2024-11-26 17:34:32.679855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:55.448 [2024-11-26 17:34:32.679879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:37:55.448 [2024-11-26 17:34:32.679891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:55.448 [2024-11-26 17:34:32.682414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:55.448 [2024-11-26 17:34:32.682454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:55.448 [2024-11-26 17:34:32.682544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:55.448 [2024-11-26 17:34:32.682591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:55.448 pt2 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:55.448 "name": "raid_bdev1", 00:37:55.448 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:55.448 "strip_size_kb": 64, 00:37:55.448 "state": "configuring", 00:37:55.448 "raid_level": "raid5f", 00:37:55.448 "superblock": true, 00:37:55.448 
"num_base_bdevs": 4, 00:37:55.448 "num_base_bdevs_discovered": 1, 00:37:55.448 "num_base_bdevs_operational": 3, 00:37:55.448 "base_bdevs_list": [ 00:37:55.448 { 00:37:55.448 "name": null, 00:37:55.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:55.448 "is_configured": false, 00:37:55.448 "data_offset": 2048, 00:37:55.448 "data_size": 63488 00:37:55.448 }, 00:37:55.448 { 00:37:55.448 "name": "pt2", 00:37:55.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:55.448 "is_configured": true, 00:37:55.448 "data_offset": 2048, 00:37:55.448 "data_size": 63488 00:37:55.448 }, 00:37:55.448 { 00:37:55.448 "name": null, 00:37:55.448 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:55.448 "is_configured": false, 00:37:55.448 "data_offset": 2048, 00:37:55.448 "data_size": 63488 00:37:55.448 }, 00:37:55.448 { 00:37:55.448 "name": null, 00:37:55.448 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:55.448 "is_configured": false, 00:37:55.448 "data_offset": 2048, 00:37:55.448 "data_size": 63488 00:37:55.448 } 00:37:55.448 ] 00:37:55.448 }' 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:55.448 17:34:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.736 [2024-11-26 17:34:33.143918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:55.736 [2024-11-26 
17:34:33.144008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:55.736 [2024-11-26 17:34:33.144038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:37:55.736 [2024-11-26 17:34:33.144062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:55.736 [2024-11-26 17:34:33.144517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:55.736 [2024-11-26 17:34:33.144549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:55.736 [2024-11-26 17:34:33.144641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:55.736 [2024-11-26 17:34:33.144671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:55.736 pt3 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.736 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.017 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.017 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:56.017 "name": "raid_bdev1", 00:37:56.017 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:56.017 "strip_size_kb": 64, 00:37:56.017 "state": "configuring", 00:37:56.017 "raid_level": "raid5f", 00:37:56.017 "superblock": true, 00:37:56.017 "num_base_bdevs": 4, 00:37:56.017 "num_base_bdevs_discovered": 2, 00:37:56.017 "num_base_bdevs_operational": 3, 00:37:56.017 "base_bdevs_list": [ 00:37:56.017 { 00:37:56.017 "name": null, 00:37:56.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:56.017 "is_configured": false, 00:37:56.017 "data_offset": 2048, 00:37:56.017 "data_size": 63488 00:37:56.017 }, 00:37:56.017 { 00:37:56.017 "name": "pt2", 00:37:56.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:56.017 "is_configured": true, 00:37:56.017 "data_offset": 2048, 00:37:56.017 "data_size": 63488 00:37:56.017 }, 00:37:56.017 { 00:37:56.017 "name": "pt3", 00:37:56.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:56.017 "is_configured": true, 00:37:56.017 "data_offset": 2048, 00:37:56.017 "data_size": 63488 00:37:56.017 }, 00:37:56.017 { 00:37:56.017 "name": null, 00:37:56.017 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:56.017 "is_configured": false, 00:37:56.017 "data_offset": 2048, 
00:37:56.017 "data_size": 63488 00:37:56.017 } 00:37:56.017 ] 00:37:56.017 }' 00:37:56.017 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:56.017 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.275 [2024-11-26 17:34:33.608035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:56.275 [2024-11-26 17:34:33.608114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:56.275 [2024-11-26 17:34:33.608143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:37:56.275 [2024-11-26 17:34:33.608156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:56.275 [2024-11-26 17:34:33.608615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:56.275 [2024-11-26 17:34:33.608644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:56.275 [2024-11-26 17:34:33.608735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:56.275 [2024-11-26 17:34:33.608765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:56.275 [2024-11-26 17:34:33.608893] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:56.275 [2024-11-26 17:34:33.608910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:56.275 [2024-11-26 17:34:33.609198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:37:56.275 [2024-11-26 17:34:33.616908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:56.275 [2024-11-26 17:34:33.616938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:37:56.275 pt4 00:37:56.275 [2024-11-26 17:34:33.617242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:56.275 
17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:56.275 "name": "raid_bdev1", 00:37:56.275 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:56.275 "strip_size_kb": 64, 00:37:56.275 "state": "online", 00:37:56.275 "raid_level": "raid5f", 00:37:56.275 "superblock": true, 00:37:56.275 "num_base_bdevs": 4, 00:37:56.275 "num_base_bdevs_discovered": 3, 00:37:56.275 "num_base_bdevs_operational": 3, 00:37:56.275 "base_bdevs_list": [ 00:37:56.275 { 00:37:56.275 "name": null, 00:37:56.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:56.275 "is_configured": false, 00:37:56.275 "data_offset": 2048, 00:37:56.275 "data_size": 63488 00:37:56.275 }, 00:37:56.275 { 00:37:56.275 "name": "pt2", 00:37:56.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:56.275 "is_configured": true, 00:37:56.275 "data_offset": 2048, 00:37:56.275 "data_size": 63488 00:37:56.275 }, 00:37:56.275 { 00:37:56.275 "name": "pt3", 00:37:56.275 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:56.275 "is_configured": true, 00:37:56.275 "data_offset": 2048, 00:37:56.275 "data_size": 63488 00:37:56.275 }, 00:37:56.275 { 00:37:56.275 "name": "pt4", 00:37:56.275 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:56.275 "is_configured": true, 00:37:56.275 "data_offset": 2048, 00:37:56.275 "data_size": 63488 00:37:56.275 } 00:37:56.275 ] 00:37:56.275 }' 00:37:56.275 17:34:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:56.275 17:34:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.842 [2024-11-26 17:34:34.070751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:56.842 [2024-11-26 17:34:34.070787] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:56.842 [2024-11-26 17:34:34.070874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:56.842 [2024-11-26 17:34:34.070959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:56.842 [2024-11-26 17:34:34.070976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.842 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.843 [2024-11-26 17:34:34.134752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:56.843 [2024-11-26 17:34:34.134827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:56.843 [2024-11-26 17:34:34.134862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:37:56.843 [2024-11-26 17:34:34.134885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:56.843 [2024-11-26 17:34:34.137651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:56.843 [2024-11-26 17:34:34.137693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:56.843 [2024-11-26 17:34:34.137783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:56.843 [2024-11-26 17:34:34.137832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:56.843 
[2024-11-26 17:34:34.137961] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:56.843 [2024-11-26 17:34:34.137976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:56.843 [2024-11-26 17:34:34.137993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:37:56.843 [2024-11-26 17:34:34.138113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:56.843 [2024-11-26 17:34:34.138245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:56.843 pt1 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:56.843 "name": "raid_bdev1", 00:37:56.843 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:56.843 "strip_size_kb": 64, 00:37:56.843 "state": "configuring", 00:37:56.843 "raid_level": "raid5f", 00:37:56.843 "superblock": true, 00:37:56.843 "num_base_bdevs": 4, 00:37:56.843 "num_base_bdevs_discovered": 2, 00:37:56.843 "num_base_bdevs_operational": 3, 00:37:56.843 "base_bdevs_list": [ 00:37:56.843 { 00:37:56.843 "name": null, 00:37:56.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:56.843 "is_configured": false, 00:37:56.843 "data_offset": 2048, 00:37:56.843 "data_size": 63488 00:37:56.843 }, 00:37:56.843 { 00:37:56.843 "name": "pt2", 00:37:56.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:56.843 "is_configured": true, 00:37:56.843 "data_offset": 2048, 00:37:56.843 "data_size": 63488 00:37:56.843 }, 00:37:56.843 { 00:37:56.843 "name": "pt3", 00:37:56.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:56.843 "is_configured": true, 00:37:56.843 "data_offset": 2048, 00:37:56.843 "data_size": 63488 00:37:56.843 }, 00:37:56.843 { 00:37:56.843 "name": null, 00:37:56.843 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:56.843 "is_configured": false, 00:37:56.843 "data_offset": 2048, 00:37:56.843 "data_size": 63488 00:37:56.843 } 00:37:56.843 ] 
00:37:56.843 }' 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:56.843 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:57.409 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:37:57.409 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.409 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:57.409 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:57.409 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.409 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:37:57.409 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:57.409 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:57.410 [2024-11-26 17:34:34.614934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:57.410 [2024-11-26 17:34:34.615018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:57.410 [2024-11-26 17:34:34.615079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:37:57.410 [2024-11-26 17:34:34.615098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:57.410 [2024-11-26 17:34:34.615654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:57.410 [2024-11-26 17:34:34.615693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:37:57.410 [2024-11-26 17:34:34.615795] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:57.410 [2024-11-26 17:34:34.615823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:57.410 [2024-11-26 17:34:34.615985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:37:57.410 [2024-11-26 17:34:34.616005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:57.410 [2024-11-26 17:34:34.616347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:37:57.410 [2024-11-26 17:34:34.625984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:37:57.410 [2024-11-26 17:34:34.626018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:37:57.410 pt4 00:37:57.410 [2024-11-26 17:34:34.626379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:57.410 17:34:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:57.410 "name": "raid_bdev1", 00:37:57.410 "uuid": "7d7c1de8-3075-4e39-bcb2-86a0b06a0e16", 00:37:57.410 "strip_size_kb": 64, 00:37:57.410 "state": "online", 00:37:57.410 "raid_level": "raid5f", 00:37:57.410 "superblock": true, 00:37:57.410 "num_base_bdevs": 4, 00:37:57.410 "num_base_bdevs_discovered": 3, 00:37:57.410 "num_base_bdevs_operational": 3, 00:37:57.410 "base_bdevs_list": [ 00:37:57.410 { 00:37:57.410 "name": null, 00:37:57.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:57.410 "is_configured": false, 00:37:57.410 "data_offset": 2048, 00:37:57.410 "data_size": 63488 00:37:57.410 }, 00:37:57.410 { 00:37:57.410 "name": "pt2", 00:37:57.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:57.410 "is_configured": true, 00:37:57.410 "data_offset": 2048, 00:37:57.410 "data_size": 63488 00:37:57.410 }, 00:37:57.410 { 00:37:57.410 "name": "pt3", 00:37:57.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:57.410 "is_configured": true, 00:37:57.410 "data_offset": 2048, 00:37:57.410 "data_size": 63488 
00:37:57.410 }, 00:37:57.410 { 00:37:57.410 "name": "pt4", 00:37:57.410 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:57.410 "is_configured": true, 00:37:57.410 "data_offset": 2048, 00:37:57.410 "data_size": 63488 00:37:57.410 } 00:37:57.410 ] 00:37:57.410 }' 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:57.410 17:34:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:57.669 17:34:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:57.669 17:34:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:37:57.669 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.669 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:57.669 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.669 17:34:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:57.928 [2024-11-26 17:34:35.121283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7d7c1de8-3075-4e39-bcb2-86a0b06a0e16 '!=' 7d7c1de8-3075-4e39-bcb2-86a0b06a0e16 ']' 00:37:57.928 17:34:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84584 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84584 ']' 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84584 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84584 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:57.928 killing process with pid 84584 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84584' 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84584 00:37:57.928 [2024-11-26 17:34:35.205027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:57.928 [2024-11-26 17:34:35.205145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:57.928 17:34:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84584 00:37:57.928 [2024-11-26 17:34:35.205225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:57.928 [2024-11-26 17:34:35.205244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:37:58.187 [2024-11-26 17:34:35.605203] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:59.563 17:34:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:37:59.563 
00:37:59.563 real 0m8.644s 00:37:59.563 user 0m13.559s 00:37:59.563 sys 0m1.735s 00:37:59.563 17:34:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:59.563 ************************************ 00:37:59.563 END TEST raid5f_superblock_test 00:37:59.563 ************************************ 00:37:59.563 17:34:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:59.563 17:34:36 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:37:59.563 17:34:36 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:37:59.563 17:34:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:37:59.563 17:34:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:59.563 17:34:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:59.563 ************************************ 00:37:59.563 START TEST raid5f_rebuild_test 00:37:59.563 ************************************ 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:37:59.563 17:34:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:37:59.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85075 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85075 00:37:59.563 17:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85075 ']' 00:37:59.564 17:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.564 17:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:59.564 17:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:59.564 17:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.564 17:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:59.564 17:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:59.564 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:59.564 Zero copy mechanism will not be used. 00:37:59.564 [2024-11-26 17:34:36.920409] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:37:59.564 [2024-11-26 17:34:36.920586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85075 ] 00:37:59.821 [2024-11-26 17:34:37.107134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.822 [2024-11-26 17:34:37.220664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.080 [2024-11-26 17:34:37.422288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:00.080 [2024-11-26 17:34:37.422349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:00.339 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.339 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:38:00.339 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:00.339 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:00.339 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.339 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.598 BaseBdev1_malloc 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.598 [2024-11-26 17:34:37.810491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:38:00.598 [2024-11-26 17:34:37.810691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:00.598 [2024-11-26 17:34:37.810751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:00.598 [2024-11-26 17:34:37.810841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:00.598 [2024-11-26 17:34:37.813279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:00.598 BaseBdev1 00:38:00.598 [2024-11-26 17:34:37.813429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.598 BaseBdev2_malloc 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.598 [2024-11-26 17:34:37.864171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:00.598 [2024-11-26 17:34:37.864386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:00.598 [2024-11-26 17:34:37.864451] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:00.598 [2024-11-26 17:34:37.864543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:00.598 [2024-11-26 17:34:37.867115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:00.598 [2024-11-26 17:34:37.867267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:00.598 BaseBdev2 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.598 BaseBdev3_malloc 00:38:00.598 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.599 [2024-11-26 17:34:37.926590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:00.599 [2024-11-26 17:34:37.926767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:00.599 [2024-11-26 17:34:37.926824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:00.599 [2024-11-26 17:34:37.926904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:00.599 
[2024-11-26 17:34:37.929313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:00.599 [2024-11-26 17:34:37.929449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:00.599 BaseBdev3 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.599 BaseBdev4_malloc 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.599 [2024-11-26 17:34:37.976661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:38:00.599 [2024-11-26 17:34:37.976849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:00.599 [2024-11-26 17:34:37.976904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:00.599 [2024-11-26 17:34:37.977012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:00.599 [2024-11-26 17:34:37.979409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:00.599 BaseBdev4 00:38:00.599 [2024-11-26 17:34:37.979578] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.599 17:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.599 spare_malloc 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.599 spare_delay 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.599 [2024-11-26 17:34:38.036654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:00.599 [2024-11-26 17:34:38.036824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:00.599 [2024-11-26 17:34:38.036877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:38:00.599 [2024-11-26 17:34:38.036958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:00.599 [2024-11-26 17:34:38.039372] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:00.599 [2024-11-26 17:34:38.039515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:00.599 spare 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.599 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.857 [2024-11-26 17:34:38.044706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:00.858 [2024-11-26 17:34:38.046896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:00.858 [2024-11-26 17:34:38.046957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:00.858 [2024-11-26 17:34:38.047008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:00.858 [2024-11-26 17:34:38.047119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:00.858 [2024-11-26 17:34:38.047135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:38:00.858 [2024-11-26 17:34:38.047403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:00.858 [2024-11-26 17:34:38.055574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:00.858 [2024-11-26 17:34:38.055597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:38:00.858 [2024-11-26 17:34:38.055803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:00.858 "name": "raid_bdev1", 00:38:00.858 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:00.858 "strip_size_kb": 64, 00:38:00.858 "state": 
"online", 00:38:00.858 "raid_level": "raid5f", 00:38:00.858 "superblock": false, 00:38:00.858 "num_base_bdevs": 4, 00:38:00.858 "num_base_bdevs_discovered": 4, 00:38:00.858 "num_base_bdevs_operational": 4, 00:38:00.858 "base_bdevs_list": [ 00:38:00.858 { 00:38:00.858 "name": "BaseBdev1", 00:38:00.858 "uuid": "deb8730f-d5a0-5f96-a3fa-4fa37f9e5f11", 00:38:00.858 "is_configured": true, 00:38:00.858 "data_offset": 0, 00:38:00.858 "data_size": 65536 00:38:00.858 }, 00:38:00.858 { 00:38:00.858 "name": "BaseBdev2", 00:38:00.858 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:00.858 "is_configured": true, 00:38:00.858 "data_offset": 0, 00:38:00.858 "data_size": 65536 00:38:00.858 }, 00:38:00.858 { 00:38:00.858 "name": "BaseBdev3", 00:38:00.858 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:00.858 "is_configured": true, 00:38:00.858 "data_offset": 0, 00:38:00.858 "data_size": 65536 00:38:00.858 }, 00:38:00.858 { 00:38:00.858 "name": "BaseBdev4", 00:38:00.858 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:00.858 "is_configured": true, 00:38:00.858 "data_offset": 0, 00:38:00.858 "data_size": 65536 00:38:00.858 } 00:38:00.858 ] 00:38:00.858 }' 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:00.858 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.116 [2024-11-26 17:34:38.480989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:01.116 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:01.375 [2024-11-26 17:34:38.768887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:38:01.375 /dev/nbd0 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:38:01.375 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:01.633 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:01.633 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:01.633 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:38:01.633 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:01.633 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:01.633 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:01.633 1+0 records in 00:38:01.633 1+0 records out 00:38:01.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281531 s, 14.5 MB/s 00:38:01.633 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:01.633 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:38:01.634 17:34:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:01.634 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:01.634 17:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:38:01.634 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:01.634 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:01.634 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:38:01.634 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:38:01.634 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:38:01.634 17:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:38:02.202 512+0 records in 00:38:02.202 512+0 records out 00:38:02.202 100663296 bytes (101 MB, 96 MiB) copied, 0.566024 s, 178 MB/s 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:02.202 
[2024-11-26 17:34:39.617908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:02.202 [2024-11-26 17:34:39.627558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:02.202 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:02.460 17:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.460 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:02.460 "name": "raid_bdev1", 00:38:02.460 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:02.460 "strip_size_kb": 64, 00:38:02.460 "state": "online", 00:38:02.460 "raid_level": "raid5f", 00:38:02.460 "superblock": false, 00:38:02.460 "num_base_bdevs": 4, 00:38:02.460 "num_base_bdevs_discovered": 3, 00:38:02.460 "num_base_bdevs_operational": 3, 00:38:02.460 "base_bdevs_list": [ 00:38:02.460 { 00:38:02.460 "name": null, 00:38:02.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:02.460 "is_configured": false, 00:38:02.460 "data_offset": 0, 00:38:02.460 "data_size": 65536 00:38:02.460 }, 00:38:02.460 { 00:38:02.460 "name": "BaseBdev2", 00:38:02.460 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:02.460 "is_configured": true, 00:38:02.460 "data_offset": 0, 00:38:02.460 "data_size": 65536 00:38:02.460 }, 00:38:02.460 { 00:38:02.460 "name": "BaseBdev3", 00:38:02.460 "uuid": 
"f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:02.460 "is_configured": true, 00:38:02.460 "data_offset": 0, 00:38:02.460 "data_size": 65536 00:38:02.460 }, 00:38:02.460 { 00:38:02.460 "name": "BaseBdev4", 00:38:02.460 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:02.460 "is_configured": true, 00:38:02.460 "data_offset": 0, 00:38:02.460 "data_size": 65536 00:38:02.460 } 00:38:02.460 ] 00:38:02.460 }' 00:38:02.460 17:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:02.460 17:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:02.719 17:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:02.719 17:34:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.719 17:34:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:02.719 [2024-11-26 17:34:40.055659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:02.719 [2024-11-26 17:34:40.072421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:38:02.719 17:34:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.719 17:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:38:02.719 [2024-11-26 17:34:40.082854] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:03.650 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:03.650 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:03.650 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:03.650 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:03.650 17:34:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:03.650 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:03.650 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.650 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:03.650 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:03.908 "name": "raid_bdev1", 00:38:03.908 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:03.908 "strip_size_kb": 64, 00:38:03.908 "state": "online", 00:38:03.908 "raid_level": "raid5f", 00:38:03.908 "superblock": false, 00:38:03.908 "num_base_bdevs": 4, 00:38:03.908 "num_base_bdevs_discovered": 4, 00:38:03.908 "num_base_bdevs_operational": 4, 00:38:03.908 "process": { 00:38:03.908 "type": "rebuild", 00:38:03.908 "target": "spare", 00:38:03.908 "progress": { 00:38:03.908 "blocks": 17280, 00:38:03.908 "percent": 8 00:38:03.908 } 00:38:03.908 }, 00:38:03.908 "base_bdevs_list": [ 00:38:03.908 { 00:38:03.908 "name": "spare", 00:38:03.908 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:03.908 "is_configured": true, 00:38:03.908 "data_offset": 0, 00:38:03.908 "data_size": 65536 00:38:03.908 }, 00:38:03.908 { 00:38:03.908 "name": "BaseBdev2", 00:38:03.908 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:03.908 "is_configured": true, 00:38:03.908 "data_offset": 0, 00:38:03.908 "data_size": 65536 00:38:03.908 }, 00:38:03.908 { 00:38:03.908 "name": "BaseBdev3", 00:38:03.908 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:03.908 "is_configured": true, 00:38:03.908 "data_offset": 0, 00:38:03.908 "data_size": 65536 00:38:03.908 }, 
00:38:03.908 { 00:38:03.908 "name": "BaseBdev4", 00:38:03.908 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:03.908 "is_configured": true, 00:38:03.908 "data_offset": 0, 00:38:03.908 "data_size": 65536 00:38:03.908 } 00:38:03.908 ] 00:38:03.908 }' 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.908 [2024-11-26 17:34:41.227807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:03.908 [2024-11-26 17:34:41.293687] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:03.908 [2024-11-26 17:34:41.293771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:03.908 [2024-11-26 17:34:41.293792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:03.908 [2024-11-26 17:34:41.293804] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:03.908 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.909 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:03.909 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.167 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:04.167 "name": "raid_bdev1", 00:38:04.167 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:04.167 "strip_size_kb": 64, 00:38:04.167 "state": "online", 00:38:04.167 "raid_level": "raid5f", 00:38:04.167 "superblock": false, 00:38:04.167 "num_base_bdevs": 4, 00:38:04.167 "num_base_bdevs_discovered": 3, 00:38:04.167 "num_base_bdevs_operational": 3, 00:38:04.167 "base_bdevs_list": [ 00:38:04.167 { 00:38:04.167 "name": null, 00:38:04.167 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:38:04.167 "is_configured": false, 00:38:04.167 "data_offset": 0, 00:38:04.167 "data_size": 65536 00:38:04.167 }, 00:38:04.167 { 00:38:04.167 "name": "BaseBdev2", 00:38:04.167 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:04.167 "is_configured": true, 00:38:04.167 "data_offset": 0, 00:38:04.167 "data_size": 65536 00:38:04.167 }, 00:38:04.167 { 00:38:04.167 "name": "BaseBdev3", 00:38:04.167 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:04.167 "is_configured": true, 00:38:04.167 "data_offset": 0, 00:38:04.167 "data_size": 65536 00:38:04.167 }, 00:38:04.167 { 00:38:04.167 "name": "BaseBdev4", 00:38:04.167 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:04.167 "is_configured": true, 00:38:04.167 "data_offset": 0, 00:38:04.167 "data_size": 65536 00:38:04.167 } 00:38:04.167 ] 00:38:04.167 }' 00:38:04.167 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:04.167 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:04.426 "name": "raid_bdev1", 00:38:04.426 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:04.426 "strip_size_kb": 64, 00:38:04.426 "state": "online", 00:38:04.426 "raid_level": "raid5f", 00:38:04.426 "superblock": false, 00:38:04.426 "num_base_bdevs": 4, 00:38:04.426 "num_base_bdevs_discovered": 3, 00:38:04.426 "num_base_bdevs_operational": 3, 00:38:04.426 "base_bdevs_list": [ 00:38:04.426 { 00:38:04.426 "name": null, 00:38:04.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:04.426 "is_configured": false, 00:38:04.426 "data_offset": 0, 00:38:04.426 "data_size": 65536 00:38:04.426 }, 00:38:04.426 { 00:38:04.426 "name": "BaseBdev2", 00:38:04.426 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:04.426 "is_configured": true, 00:38:04.426 "data_offset": 0, 00:38:04.426 "data_size": 65536 00:38:04.426 }, 00:38:04.426 { 00:38:04.426 "name": "BaseBdev3", 00:38:04.426 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:04.426 "is_configured": true, 00:38:04.426 "data_offset": 0, 00:38:04.426 "data_size": 65536 00:38:04.426 }, 00:38:04.426 { 00:38:04.426 "name": "BaseBdev4", 00:38:04.426 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:04.426 "is_configured": true, 00:38:04.426 "data_offset": 0, 00:38:04.426 "data_size": 65536 00:38:04.426 } 00:38:04.426 ] 00:38:04.426 }' 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:04.426 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:04.686 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:38:04.686 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:04.686 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.686 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:04.686 [2024-11-26 17:34:41.897399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:04.686 [2024-11-26 17:34:41.912590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:38:04.686 17:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.686 17:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:38:04.686 [2024-11-26 17:34:41.922114] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.623 17:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:05.623 "name": "raid_bdev1", 00:38:05.623 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:05.623 "strip_size_kb": 64, 00:38:05.623 "state": "online", 00:38:05.623 "raid_level": "raid5f", 00:38:05.623 "superblock": false, 00:38:05.623 "num_base_bdevs": 4, 00:38:05.623 "num_base_bdevs_discovered": 4, 00:38:05.623 "num_base_bdevs_operational": 4, 00:38:05.623 "process": { 00:38:05.623 "type": "rebuild", 00:38:05.623 "target": "spare", 00:38:05.623 "progress": { 00:38:05.623 "blocks": 17280, 00:38:05.623 "percent": 8 00:38:05.623 } 00:38:05.623 }, 00:38:05.624 "base_bdevs_list": [ 00:38:05.624 { 00:38:05.624 "name": "spare", 00:38:05.624 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:05.624 "is_configured": true, 00:38:05.624 "data_offset": 0, 00:38:05.624 "data_size": 65536 00:38:05.624 }, 00:38:05.624 { 00:38:05.624 "name": "BaseBdev2", 00:38:05.624 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:05.624 "is_configured": true, 00:38:05.624 "data_offset": 0, 00:38:05.624 "data_size": 65536 00:38:05.624 }, 00:38:05.624 { 00:38:05.624 "name": "BaseBdev3", 00:38:05.624 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:05.624 "is_configured": true, 00:38:05.624 "data_offset": 0, 00:38:05.624 "data_size": 65536 00:38:05.624 }, 00:38:05.624 { 00:38:05.624 "name": "BaseBdev4", 00:38:05.624 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:05.624 "is_configured": true, 00:38:05.624 "data_offset": 0, 00:38:05.624 "data_size": 65536 00:38:05.624 } 00:38:05.624 ] 00:38:05.624 }' 00:38:05.624 17:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=637 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:05.624 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:05.882 17:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.882 17:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:05.882 17:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.882 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:05.883 "name": "raid_bdev1", 00:38:05.883 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:05.883 "strip_size_kb": 64, 
00:38:05.883 "state": "online", 00:38:05.883 "raid_level": "raid5f", 00:38:05.883 "superblock": false, 00:38:05.883 "num_base_bdevs": 4, 00:38:05.883 "num_base_bdevs_discovered": 4, 00:38:05.883 "num_base_bdevs_operational": 4, 00:38:05.883 "process": { 00:38:05.883 "type": "rebuild", 00:38:05.883 "target": "spare", 00:38:05.883 "progress": { 00:38:05.883 "blocks": 21120, 00:38:05.883 "percent": 10 00:38:05.883 } 00:38:05.883 }, 00:38:05.883 "base_bdevs_list": [ 00:38:05.883 { 00:38:05.883 "name": "spare", 00:38:05.883 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:05.883 "is_configured": true, 00:38:05.883 "data_offset": 0, 00:38:05.883 "data_size": 65536 00:38:05.883 }, 00:38:05.883 { 00:38:05.883 "name": "BaseBdev2", 00:38:05.883 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:05.883 "is_configured": true, 00:38:05.883 "data_offset": 0, 00:38:05.883 "data_size": 65536 00:38:05.883 }, 00:38:05.883 { 00:38:05.883 "name": "BaseBdev3", 00:38:05.883 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:05.883 "is_configured": true, 00:38:05.883 "data_offset": 0, 00:38:05.883 "data_size": 65536 00:38:05.883 }, 00:38:05.883 { 00:38:05.883 "name": "BaseBdev4", 00:38:05.883 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:05.883 "is_configured": true, 00:38:05.883 "data_offset": 0, 00:38:05.883 "data_size": 65536 00:38:05.883 } 00:38:05.883 ] 00:38:05.883 }' 00:38:05.883 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:05.883 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:05.883 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:05.883 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:05.883 17:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.819 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:06.819 "name": "raid_bdev1", 00:38:06.819 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:06.819 "strip_size_kb": 64, 00:38:06.819 "state": "online", 00:38:06.819 "raid_level": "raid5f", 00:38:06.819 "superblock": false, 00:38:06.819 "num_base_bdevs": 4, 00:38:06.819 "num_base_bdevs_discovered": 4, 00:38:06.819 "num_base_bdevs_operational": 4, 00:38:06.819 "process": { 00:38:06.819 "type": "rebuild", 00:38:06.819 "target": "spare", 00:38:06.819 "progress": { 00:38:06.819 "blocks": 42240, 00:38:06.819 "percent": 21 00:38:06.819 } 00:38:06.820 }, 00:38:06.820 "base_bdevs_list": [ 00:38:06.820 { 00:38:06.820 "name": "spare", 00:38:06.820 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:06.820 "is_configured": true, 
00:38:06.820 "data_offset": 0, 00:38:06.820 "data_size": 65536 00:38:06.820 }, 00:38:06.820 { 00:38:06.820 "name": "BaseBdev2", 00:38:06.820 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:06.820 "is_configured": true, 00:38:06.820 "data_offset": 0, 00:38:06.820 "data_size": 65536 00:38:06.820 }, 00:38:06.820 { 00:38:06.820 "name": "BaseBdev3", 00:38:06.820 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:06.820 "is_configured": true, 00:38:06.820 "data_offset": 0, 00:38:06.820 "data_size": 65536 00:38:06.820 }, 00:38:06.820 { 00:38:06.820 "name": "BaseBdev4", 00:38:06.820 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:06.820 "is_configured": true, 00:38:06.820 "data_offset": 0, 00:38:06.820 "data_size": 65536 00:38:06.820 } 00:38:06.820 ] 00:38:06.820 }' 00:38:06.820 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:07.079 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:07.079 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:07.079 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:07.079 17:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:08.016 "name": "raid_bdev1", 00:38:08.016 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:08.016 "strip_size_kb": 64, 00:38:08.016 "state": "online", 00:38:08.016 "raid_level": "raid5f", 00:38:08.016 "superblock": false, 00:38:08.016 "num_base_bdevs": 4, 00:38:08.016 "num_base_bdevs_discovered": 4, 00:38:08.016 "num_base_bdevs_operational": 4, 00:38:08.016 "process": { 00:38:08.016 "type": "rebuild", 00:38:08.016 "target": "spare", 00:38:08.016 "progress": { 00:38:08.016 "blocks": 65280, 00:38:08.016 "percent": 33 00:38:08.016 } 00:38:08.016 }, 00:38:08.016 "base_bdevs_list": [ 00:38:08.016 { 00:38:08.016 "name": "spare", 00:38:08.016 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:08.016 "is_configured": true, 00:38:08.016 "data_offset": 0, 00:38:08.016 "data_size": 65536 00:38:08.016 }, 00:38:08.016 { 00:38:08.016 "name": "BaseBdev2", 00:38:08.016 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:08.016 "is_configured": true, 00:38:08.016 "data_offset": 0, 00:38:08.016 "data_size": 65536 00:38:08.016 }, 00:38:08.016 { 00:38:08.016 "name": "BaseBdev3", 00:38:08.016 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:08.016 "is_configured": true, 00:38:08.016 "data_offset": 0, 00:38:08.016 "data_size": 65536 00:38:08.016 }, 00:38:08.016 { 00:38:08.016 "name": "BaseBdev4", 00:38:08.016 "uuid": 
"782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:08.016 "is_configured": true, 00:38:08.016 "data_offset": 0, 00:38:08.016 "data_size": 65536 00:38:08.016 } 00:38:08.016 ] 00:38:08.016 }' 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:08.016 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:08.276 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:08.276 17:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.213 17:34:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:09.213 "name": "raid_bdev1", 00:38:09.213 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:09.213 "strip_size_kb": 64, 00:38:09.213 "state": "online", 00:38:09.213 "raid_level": "raid5f", 00:38:09.213 "superblock": false, 00:38:09.213 "num_base_bdevs": 4, 00:38:09.213 "num_base_bdevs_discovered": 4, 00:38:09.213 "num_base_bdevs_operational": 4, 00:38:09.213 "process": { 00:38:09.213 "type": "rebuild", 00:38:09.213 "target": "spare", 00:38:09.213 "progress": { 00:38:09.213 "blocks": 86400, 00:38:09.213 "percent": 43 00:38:09.213 } 00:38:09.213 }, 00:38:09.213 "base_bdevs_list": [ 00:38:09.213 { 00:38:09.213 "name": "spare", 00:38:09.213 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:09.213 "is_configured": true, 00:38:09.213 "data_offset": 0, 00:38:09.213 "data_size": 65536 00:38:09.213 }, 00:38:09.214 { 00:38:09.214 "name": "BaseBdev2", 00:38:09.214 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:09.214 "is_configured": true, 00:38:09.214 "data_offset": 0, 00:38:09.214 "data_size": 65536 00:38:09.214 }, 00:38:09.214 { 00:38:09.214 "name": "BaseBdev3", 00:38:09.214 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:09.214 "is_configured": true, 00:38:09.214 "data_offset": 0, 00:38:09.214 "data_size": 65536 00:38:09.214 }, 00:38:09.214 { 00:38:09.214 "name": "BaseBdev4", 00:38:09.214 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:09.214 "is_configured": true, 00:38:09.214 "data_offset": 0, 00:38:09.214 "data_size": 65536 00:38:09.214 } 00:38:09.214 ] 00:38:09.214 }' 00:38:09.214 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:09.214 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:09.214 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:09.214 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:38:09.214 17:34:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:10.593 "name": "raid_bdev1", 00:38:10.593 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:10.593 "strip_size_kb": 64, 00:38:10.593 "state": "online", 00:38:10.593 "raid_level": "raid5f", 00:38:10.593 "superblock": false, 00:38:10.593 "num_base_bdevs": 4, 00:38:10.593 "num_base_bdevs_discovered": 4, 00:38:10.593 "num_base_bdevs_operational": 4, 00:38:10.593 "process": { 00:38:10.593 "type": "rebuild", 00:38:10.593 "target": "spare", 00:38:10.593 "progress": { 00:38:10.593 "blocks": 107520, 00:38:10.593 "percent": 54 00:38:10.593 } 00:38:10.593 }, 00:38:10.593 
"base_bdevs_list": [ 00:38:10.593 { 00:38:10.593 "name": "spare", 00:38:10.593 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:10.593 "is_configured": true, 00:38:10.593 "data_offset": 0, 00:38:10.593 "data_size": 65536 00:38:10.593 }, 00:38:10.593 { 00:38:10.593 "name": "BaseBdev2", 00:38:10.593 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:10.593 "is_configured": true, 00:38:10.593 "data_offset": 0, 00:38:10.593 "data_size": 65536 00:38:10.593 }, 00:38:10.593 { 00:38:10.593 "name": "BaseBdev3", 00:38:10.593 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:10.593 "is_configured": true, 00:38:10.593 "data_offset": 0, 00:38:10.593 "data_size": 65536 00:38:10.593 }, 00:38:10.593 { 00:38:10.593 "name": "BaseBdev4", 00:38:10.593 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:10.593 "is_configured": true, 00:38:10.593 "data_offset": 0, 00:38:10.593 "data_size": 65536 00:38:10.593 } 00:38:10.593 ] 00:38:10.593 }' 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:10.593 17:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:11.530 17:34:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:11.530 "name": "raid_bdev1", 00:38:11.530 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:11.530 "strip_size_kb": 64, 00:38:11.530 "state": "online", 00:38:11.530 "raid_level": "raid5f", 00:38:11.530 "superblock": false, 00:38:11.530 "num_base_bdevs": 4, 00:38:11.530 "num_base_bdevs_discovered": 4, 00:38:11.530 "num_base_bdevs_operational": 4, 00:38:11.530 "process": { 00:38:11.530 "type": "rebuild", 00:38:11.530 "target": "spare", 00:38:11.530 "progress": { 00:38:11.530 "blocks": 130560, 00:38:11.530 "percent": 66 00:38:11.530 } 00:38:11.530 }, 00:38:11.530 "base_bdevs_list": [ 00:38:11.530 { 00:38:11.530 "name": "spare", 00:38:11.530 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:11.530 "is_configured": true, 00:38:11.530 "data_offset": 0, 00:38:11.530 "data_size": 65536 00:38:11.530 }, 00:38:11.530 { 00:38:11.530 "name": "BaseBdev2", 00:38:11.530 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:11.530 "is_configured": true, 00:38:11.530 "data_offset": 0, 00:38:11.530 "data_size": 65536 00:38:11.530 }, 00:38:11.530 { 00:38:11.530 "name": "BaseBdev3", 00:38:11.530 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:11.530 
"is_configured": true, 00:38:11.530 "data_offset": 0, 00:38:11.530 "data_size": 65536 00:38:11.530 }, 00:38:11.530 { 00:38:11.530 "name": "BaseBdev4", 00:38:11.530 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:11.530 "is_configured": true, 00:38:11.530 "data_offset": 0, 00:38:11.530 "data_size": 65536 00:38:11.530 } 00:38:11.530 ] 00:38:11.530 }' 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:11.530 17:34:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:12.908 "name": "raid_bdev1", 00:38:12.908 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:12.908 "strip_size_kb": 64, 00:38:12.908 "state": "online", 00:38:12.908 "raid_level": "raid5f", 00:38:12.908 "superblock": false, 00:38:12.908 "num_base_bdevs": 4, 00:38:12.908 "num_base_bdevs_discovered": 4, 00:38:12.908 "num_base_bdevs_operational": 4, 00:38:12.908 "process": { 00:38:12.908 "type": "rebuild", 00:38:12.908 "target": "spare", 00:38:12.908 "progress": { 00:38:12.908 "blocks": 151680, 00:38:12.908 "percent": 77 00:38:12.908 } 00:38:12.908 }, 00:38:12.908 "base_bdevs_list": [ 00:38:12.908 { 00:38:12.908 "name": "spare", 00:38:12.908 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:12.908 "is_configured": true, 00:38:12.908 "data_offset": 0, 00:38:12.908 "data_size": 65536 00:38:12.908 }, 00:38:12.908 { 00:38:12.908 "name": "BaseBdev2", 00:38:12.908 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:12.908 "is_configured": true, 00:38:12.908 "data_offset": 0, 00:38:12.908 "data_size": 65536 00:38:12.908 }, 00:38:12.908 { 00:38:12.908 "name": "BaseBdev3", 00:38:12.908 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:12.908 "is_configured": true, 00:38:12.908 "data_offset": 0, 00:38:12.908 "data_size": 65536 00:38:12.908 }, 00:38:12.908 { 00:38:12.908 "name": "BaseBdev4", 00:38:12.908 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:12.908 "is_configured": true, 00:38:12.908 "data_offset": 0, 00:38:12.908 "data_size": 65536 00:38:12.908 } 00:38:12.908 ] 00:38:12.908 }' 00:38:12.908 17:34:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:12.908 17:34:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:12.908 17:34:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:12.908 17:34:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:12.908 17:34:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:13.845 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:13.845 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:13.845 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:13.845 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:13.845 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:13.846 "name": "raid_bdev1", 00:38:13.846 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:13.846 "strip_size_kb": 64, 00:38:13.846 "state": "online", 00:38:13.846 "raid_level": "raid5f", 00:38:13.846 "superblock": false, 00:38:13.846 "num_base_bdevs": 4, 00:38:13.846 "num_base_bdevs_discovered": 4, 00:38:13.846 "num_base_bdevs_operational": 4, 00:38:13.846 "process": { 00:38:13.846 
"type": "rebuild", 00:38:13.846 "target": "spare", 00:38:13.846 "progress": { 00:38:13.846 "blocks": 172800, 00:38:13.846 "percent": 87 00:38:13.846 } 00:38:13.846 }, 00:38:13.846 "base_bdevs_list": [ 00:38:13.846 { 00:38:13.846 "name": "spare", 00:38:13.846 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:13.846 "is_configured": true, 00:38:13.846 "data_offset": 0, 00:38:13.846 "data_size": 65536 00:38:13.846 }, 00:38:13.846 { 00:38:13.846 "name": "BaseBdev2", 00:38:13.846 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:13.846 "is_configured": true, 00:38:13.846 "data_offset": 0, 00:38:13.846 "data_size": 65536 00:38:13.846 }, 00:38:13.846 { 00:38:13.846 "name": "BaseBdev3", 00:38:13.846 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:13.846 "is_configured": true, 00:38:13.846 "data_offset": 0, 00:38:13.846 "data_size": 65536 00:38:13.846 }, 00:38:13.846 { 00:38:13.846 "name": "BaseBdev4", 00:38:13.846 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:13.846 "is_configured": true, 00:38:13.846 "data_offset": 0, 00:38:13.846 "data_size": 65536 00:38:13.846 } 00:38:13.846 ] 00:38:13.846 }' 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:13.846 17:34:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:14.825 "name": "raid_bdev1", 00:38:14.825 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:14.825 "strip_size_kb": 64, 00:38:14.825 "state": "online", 00:38:14.825 "raid_level": "raid5f", 00:38:14.825 "superblock": false, 00:38:14.825 "num_base_bdevs": 4, 00:38:14.825 "num_base_bdevs_discovered": 4, 00:38:14.825 "num_base_bdevs_operational": 4, 00:38:14.825 "process": { 00:38:14.825 "type": "rebuild", 00:38:14.825 "target": "spare", 00:38:14.825 "progress": { 00:38:14.825 "blocks": 193920, 00:38:14.825 "percent": 98 00:38:14.825 } 00:38:14.825 }, 00:38:14.825 "base_bdevs_list": [ 00:38:14.825 { 00:38:14.825 "name": "spare", 00:38:14.825 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:14.825 "is_configured": true, 00:38:14.825 "data_offset": 0, 00:38:14.825 "data_size": 65536 00:38:14.825 }, 00:38:14.825 { 00:38:14.825 "name": "BaseBdev2", 00:38:14.825 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:14.825 "is_configured": true, 00:38:14.825 "data_offset": 0, 00:38:14.825 
"data_size": 65536 00:38:14.825 }, 00:38:14.825 { 00:38:14.825 "name": "BaseBdev3", 00:38:14.825 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:14.825 "is_configured": true, 00:38:14.825 "data_offset": 0, 00:38:14.825 "data_size": 65536 00:38:14.825 }, 00:38:14.825 { 00:38:14.825 "name": "BaseBdev4", 00:38:14.825 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:14.825 "is_configured": true, 00:38:14.825 "data_offset": 0, 00:38:14.825 "data_size": 65536 00:38:14.825 } 00:38:14.825 ] 00:38:14.825 }' 00:38:14.825 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:15.091 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:15.091 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:15.091 [2024-11-26 17:34:52.309671] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:15.091 [2024-11-26 17:34:52.309744] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:15.091 [2024-11-26 17:34:52.309795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:15.091 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:15.091 17:34:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.030 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:16.030 "name": "raid_bdev1", 00:38:16.030 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:16.030 "strip_size_kb": 64, 00:38:16.030 "state": "online", 00:38:16.030 "raid_level": "raid5f", 00:38:16.030 "superblock": false, 00:38:16.030 "num_base_bdevs": 4, 00:38:16.030 "num_base_bdevs_discovered": 4, 00:38:16.030 "num_base_bdevs_operational": 4, 00:38:16.030 "base_bdevs_list": [ 00:38:16.030 { 00:38:16.030 "name": "spare", 00:38:16.030 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:16.030 "is_configured": true, 00:38:16.030 "data_offset": 0, 00:38:16.030 "data_size": 65536 00:38:16.030 }, 00:38:16.030 { 00:38:16.030 "name": "BaseBdev2", 00:38:16.030 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:16.030 "is_configured": true, 00:38:16.030 "data_offset": 0, 00:38:16.030 "data_size": 65536 00:38:16.030 }, 00:38:16.030 { 00:38:16.031 "name": "BaseBdev3", 00:38:16.031 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:16.031 "is_configured": true, 00:38:16.031 "data_offset": 0, 00:38:16.031 "data_size": 65536 00:38:16.031 }, 00:38:16.031 { 00:38:16.031 "name": "BaseBdev4", 00:38:16.031 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:16.031 "is_configured": true, 00:38:16.031 "data_offset": 0, 
00:38:16.031 "data_size": 65536 00:38:16.031 } 00:38:16.031 ] 00:38:16.031 }' 00:38:16.031 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:16.031 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:16.031 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.290 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:16.290 "name": "raid_bdev1", 00:38:16.290 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:16.290 "strip_size_kb": 64, 00:38:16.290 "state": "online", 00:38:16.290 "raid_level": 
"raid5f", 00:38:16.290 "superblock": false, 00:38:16.290 "num_base_bdevs": 4, 00:38:16.291 "num_base_bdevs_discovered": 4, 00:38:16.291 "num_base_bdevs_operational": 4, 00:38:16.291 "base_bdevs_list": [ 00:38:16.291 { 00:38:16.291 "name": "spare", 00:38:16.291 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:16.291 "is_configured": true, 00:38:16.291 "data_offset": 0, 00:38:16.291 "data_size": 65536 00:38:16.291 }, 00:38:16.291 { 00:38:16.291 "name": "BaseBdev2", 00:38:16.291 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:16.291 "is_configured": true, 00:38:16.291 "data_offset": 0, 00:38:16.291 "data_size": 65536 00:38:16.291 }, 00:38:16.291 { 00:38:16.291 "name": "BaseBdev3", 00:38:16.291 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:16.291 "is_configured": true, 00:38:16.291 "data_offset": 0, 00:38:16.291 "data_size": 65536 00:38:16.291 }, 00:38:16.291 { 00:38:16.291 "name": "BaseBdev4", 00:38:16.291 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:16.291 "is_configured": true, 00:38:16.291 "data_offset": 0, 00:38:16.291 "data_size": 65536 00:38:16.291 } 00:38:16.291 ] 00:38:16.291 }' 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:16.291 "name": "raid_bdev1", 00:38:16.291 "uuid": "d69c48fb-68ac-4057-a55f-165f44ffa349", 00:38:16.291 "strip_size_kb": 64, 00:38:16.291 "state": "online", 00:38:16.291 "raid_level": "raid5f", 00:38:16.291 "superblock": false, 00:38:16.291 "num_base_bdevs": 4, 00:38:16.291 "num_base_bdevs_discovered": 4, 00:38:16.291 "num_base_bdevs_operational": 4, 00:38:16.291 "base_bdevs_list": [ 00:38:16.291 { 00:38:16.291 "name": "spare", 00:38:16.291 "uuid": "928b6b94-112b-5bb8-9e65-23ec21ac84b0", 00:38:16.291 "is_configured": true, 00:38:16.291 "data_offset": 0, 00:38:16.291 "data_size": 65536 00:38:16.291 }, 00:38:16.291 { 00:38:16.291 "name": "BaseBdev2", 
00:38:16.291 "uuid": "cfee6f69-a25c-5b18-9e2d-092c463d5a28", 00:38:16.291 "is_configured": true, 00:38:16.291 "data_offset": 0, 00:38:16.291 "data_size": 65536 00:38:16.291 }, 00:38:16.291 { 00:38:16.291 "name": "BaseBdev3", 00:38:16.291 "uuid": "f1ac2674-5020-5509-bcee-03ffc5805675", 00:38:16.291 "is_configured": true, 00:38:16.291 "data_offset": 0, 00:38:16.291 "data_size": 65536 00:38:16.291 }, 00:38:16.291 { 00:38:16.291 "name": "BaseBdev4", 00:38:16.291 "uuid": "782599af-a387-5f8c-9d23-b8a0ba68f83c", 00:38:16.291 "is_configured": true, 00:38:16.291 "data_offset": 0, 00:38:16.291 "data_size": 65536 00:38:16.291 } 00:38:16.291 ] 00:38:16.291 }' 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:16.291 17:34:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:16.858 [2024-11-26 17:34:54.059382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:16.858 [2024-11-26 17:34:54.059682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:16.858 [2024-11-26 17:34:54.059949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:16.858 [2024-11-26 17:34:54.060149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:16.858 [2024-11-26 17:34:54.060174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.858 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:16.859 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:17.117 /dev/nbd0 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:17.117 1+0 records in 00:38:17.117 1+0 records out 00:38:17.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354009 s, 11.6 MB/s 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:17.117 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:38:17.376 /dev/nbd1 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:17.376 1+0 records in 00:38:17.376 1+0 records out 00:38:17.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442046 s, 9.3 MB/s 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:17.376 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:38:17.634 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:38:17.634 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:17.634 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:17.634 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:17.634 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:38:17.634 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:17.634 17:34:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:17.892 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85075 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85075 ']' 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85075 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 85075 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85075' 00:38:18.152 killing process with pid 85075 00:38:18.152 Received shutdown signal, test time was about 60.000000 seconds 00:38:18.152 00:38:18.152 Latency(us) 00:38:18.152 [2024-11-26T17:34:55.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.152 [2024-11-26T17:34:55.599Z] =================================================================================================================== 00:38:18.152 [2024-11-26T17:34:55.599Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85075 00:38:18.152 [2024-11-26 17:34:55.444052] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:18.152 17:34:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85075 00:38:18.718 [2024-11-26 17:34:55.991714] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:38:20.095 00:38:20.095 real 0m20.442s 00:38:20.095 user 0m24.286s 00:38:20.095 sys 0m2.539s 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:20.095 ************************************ 00:38:20.095 END TEST raid5f_rebuild_test 00:38:20.095 ************************************ 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:20.095 17:34:57 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:38:20.095 17:34:57 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:38:20.095 17:34:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:20.095 17:34:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:20.095 ************************************ 00:38:20.095 START TEST raid5f_rebuild_test_sb 00:38:20.095 ************************************ 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:38:20.095 17:34:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:38:20.095 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85592 
00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85592 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85592 ']' 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:20.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.096 17:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.096 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:20.096 Zero copy mechanism will not be used. 00:38:20.096 [2024-11-26 17:34:57.436383] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:38:20.096 [2024-11-26 17:34:57.436563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85592 ] 00:38:20.356 [2024-11-26 17:34:57.625388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.356 [2024-11-26 17:34:57.767472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.616 [2024-11-26 17:34:58.017466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:20.616 [2024-11-26 17:34:58.017562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:20.876 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:20.876 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:38:20.876 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:20.876 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:20.876 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.876 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.136 BaseBdev1_malloc 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.136 [2024-11-26 17:34:58.352658] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:21.136 [2024-11-26 17:34:58.352737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:21.136 [2024-11-26 17:34:58.352767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:21.136 [2024-11-26 17:34:58.352783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:21.136 [2024-11-26 17:34:58.355500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:21.136 [2024-11-26 17:34:58.355544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:21.136 BaseBdev1 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.136 BaseBdev2_malloc 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.136 [2024-11-26 17:34:58.410375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:21.136 [2024-11-26 17:34:58.410444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:38:21.136 [2024-11-26 17:34:58.410475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:21.136 [2024-11-26 17:34:58.410490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:21.136 [2024-11-26 17:34:58.413140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:21.136 [2024-11-26 17:34:58.413403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:21.136 BaseBdev2 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.136 BaseBdev3_malloc 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.136 [2024-11-26 17:34:58.481654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:21.136 [2024-11-26 17:34:58.481891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:21.136 [2024-11-26 17:34:58.481927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:21.136 [2024-11-26 
17:34:58.481945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:21.136 [2024-11-26 17:34:58.484721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:21.136 [2024-11-26 17:34:58.484768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:21.136 BaseBdev3 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.136 BaseBdev4_malloc 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.136 [2024-11-26 17:34:58.545445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:38:21.136 [2024-11-26 17:34:58.545682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:21.136 [2024-11-26 17:34:58.545715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:21.136 [2024-11-26 17:34:58.545732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:21.136 [2024-11-26 17:34:58.548495] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:38:21.136 [2024-11-26 17:34:58.548546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:21.136 BaseBdev4 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.136 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.395 spare_malloc 00:38:21.395 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.395 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:21.395 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.396 spare_delay 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.396 [2024-11-26 17:34:58.616625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:21.396 [2024-11-26 17:34:58.616858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:21.396 [2024-11-26 17:34:58.616914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:38:21.396 [2024-11-26 17:34:58.616996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:21.396 [2024-11-26 17:34:58.619868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:21.396 [2024-11-26 17:34:58.620013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:21.396 spare 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.396 [2024-11-26 17:34:58.628726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:21.396 [2024-11-26 17:34:58.631317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:21.396 [2024-11-26 17:34:58.631383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:21.396 [2024-11-26 17:34:58.631449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:21.396 [2024-11-26 17:34:58.631664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:21.396 [2024-11-26 17:34:58.631681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:38:21.396 [2024-11-26 17:34:58.631946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:21.396 [2024-11-26 17:34:58.639926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:21.396 [2024-11-26 17:34:58.640083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:38:21.396 [2024-11-26 17:34:58.640311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.396 17:34:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:21.396 "name": "raid_bdev1", 00:38:21.396 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:21.396 "strip_size_kb": 64, 00:38:21.396 "state": "online", 00:38:21.396 "raid_level": "raid5f", 00:38:21.396 "superblock": true, 00:38:21.396 "num_base_bdevs": 4, 00:38:21.396 "num_base_bdevs_discovered": 4, 00:38:21.396 "num_base_bdevs_operational": 4, 00:38:21.396 "base_bdevs_list": [ 00:38:21.396 { 00:38:21.396 "name": "BaseBdev1", 00:38:21.396 "uuid": "8982b947-ce25-53e7-ae75-6d145c99978a", 00:38:21.396 "is_configured": true, 00:38:21.396 "data_offset": 2048, 00:38:21.396 "data_size": 63488 00:38:21.396 }, 00:38:21.396 { 00:38:21.396 "name": "BaseBdev2", 00:38:21.396 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:21.396 "is_configured": true, 00:38:21.396 "data_offset": 2048, 00:38:21.396 "data_size": 63488 00:38:21.396 }, 00:38:21.396 { 00:38:21.396 "name": "BaseBdev3", 00:38:21.396 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:21.396 "is_configured": true, 00:38:21.396 "data_offset": 2048, 00:38:21.396 "data_size": 63488 00:38:21.396 }, 00:38:21.396 { 00:38:21.396 "name": "BaseBdev4", 00:38:21.396 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:21.396 "is_configured": true, 00:38:21.396 "data_offset": 2048, 00:38:21.396 "data_size": 63488 00:38:21.396 } 00:38:21.396 ] 00:38:21.396 }' 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:21.396 17:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.656 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:21.656 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:38:21.656 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.656 17:34:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.656 [2024-11-26 17:34:59.049889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:21.656 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.656 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:38:21.656 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:21.656 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:21.656 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.656 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:21.915 17:34:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:21.915 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:22.174 [2024-11-26 17:34:59.413798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:38:22.174 /dev/nbd0 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:22.174 1+0 records in 00:38:22.174 
1+0 records out 00:38:22.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322394 s, 12.7 MB/s 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:38:22.174 17:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:38:22.742 496+0 records in 00:38:22.742 496+0 records out 00:38:22.742 97517568 bytes (98 MB, 93 MiB) copied, 0.571934 s, 171 MB/s 00:38:22.742 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:38:22.742 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:22.742 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:22.742 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:22.742 17:35:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:38:22.742 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:22.742 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:23.001 [2024-11-26 17:35:00.352717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:23.001 [2024-11-26 17:35:00.366457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:23.001 17:35:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:23.001 "name": "raid_bdev1", 00:38:23.001 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:23.001 "strip_size_kb": 64, 00:38:23.001 "state": "online", 00:38:23.001 "raid_level": "raid5f", 00:38:23.001 "superblock": true, 00:38:23.001 "num_base_bdevs": 4, 00:38:23.001 "num_base_bdevs_discovered": 3, 00:38:23.001 "num_base_bdevs_operational": 3, 00:38:23.001 
"base_bdevs_list": [ 00:38:23.001 { 00:38:23.001 "name": null, 00:38:23.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:23.001 "is_configured": false, 00:38:23.001 "data_offset": 0, 00:38:23.001 "data_size": 63488 00:38:23.001 }, 00:38:23.001 { 00:38:23.001 "name": "BaseBdev2", 00:38:23.001 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:23.001 "is_configured": true, 00:38:23.001 "data_offset": 2048, 00:38:23.001 "data_size": 63488 00:38:23.001 }, 00:38:23.001 { 00:38:23.001 "name": "BaseBdev3", 00:38:23.001 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:23.001 "is_configured": true, 00:38:23.001 "data_offset": 2048, 00:38:23.001 "data_size": 63488 00:38:23.001 }, 00:38:23.001 { 00:38:23.001 "name": "BaseBdev4", 00:38:23.001 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:23.001 "is_configured": true, 00:38:23.001 "data_offset": 2048, 00:38:23.001 "data_size": 63488 00:38:23.001 } 00:38:23.001 ] 00:38:23.001 }' 00:38:23.001 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:23.002 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:23.569 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:23.569 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.569 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:23.569 [2024-11-26 17:35:00.822599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:23.569 [2024-11-26 17:35:00.839806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:38:23.569 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.569 17:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:38:23.569 [2024-11-26 17:35:00.849887] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:24.506 "name": "raid_bdev1", 00:38:24.506 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:24.506 "strip_size_kb": 64, 00:38:24.506 "state": "online", 00:38:24.506 "raid_level": "raid5f", 00:38:24.506 "superblock": true, 00:38:24.506 "num_base_bdevs": 4, 00:38:24.506 "num_base_bdevs_discovered": 4, 00:38:24.506 "num_base_bdevs_operational": 4, 00:38:24.506 "process": { 00:38:24.506 "type": "rebuild", 00:38:24.506 "target": "spare", 00:38:24.506 "progress": { 00:38:24.506 "blocks": 17280, 00:38:24.506 "percent": 9 00:38:24.506 } 00:38:24.506 }, 00:38:24.506 "base_bdevs_list": [ 00:38:24.506 { 00:38:24.506 "name": "spare", 00:38:24.506 "uuid": 
"df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:24.506 "is_configured": true, 00:38:24.506 "data_offset": 2048, 00:38:24.506 "data_size": 63488 00:38:24.506 }, 00:38:24.506 { 00:38:24.506 "name": "BaseBdev2", 00:38:24.506 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:24.506 "is_configured": true, 00:38:24.506 "data_offset": 2048, 00:38:24.506 "data_size": 63488 00:38:24.506 }, 00:38:24.506 { 00:38:24.506 "name": "BaseBdev3", 00:38:24.506 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:24.506 "is_configured": true, 00:38:24.506 "data_offset": 2048, 00:38:24.506 "data_size": 63488 00:38:24.506 }, 00:38:24.506 { 00:38:24.506 "name": "BaseBdev4", 00:38:24.506 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:24.506 "is_configured": true, 00:38:24.506 "data_offset": 2048, 00:38:24.506 "data_size": 63488 00:38:24.506 } 00:38:24.506 ] 00:38:24.506 }' 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:24.506 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:24.765 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:24.765 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:24.765 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.765 17:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:24.765 [2024-11-26 17:35:01.975028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:24.765 [2024-11-26 17:35:02.061011] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:24.765 [2024-11-26 17:35:02.061117] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:24.765 [2024-11-26 17:35:02.061137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:24.765 [2024-11-26 17:35:02.061150] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:24.765 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.765 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:24.765 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:24.765 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:24.765 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:24.765 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:24.765 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:24.766 "name": "raid_bdev1", 00:38:24.766 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:24.766 "strip_size_kb": 64, 00:38:24.766 "state": "online", 00:38:24.766 "raid_level": "raid5f", 00:38:24.766 "superblock": true, 00:38:24.766 "num_base_bdevs": 4, 00:38:24.766 "num_base_bdevs_discovered": 3, 00:38:24.766 "num_base_bdevs_operational": 3, 00:38:24.766 "base_bdevs_list": [ 00:38:24.766 { 00:38:24.766 "name": null, 00:38:24.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:24.766 "is_configured": false, 00:38:24.766 "data_offset": 0, 00:38:24.766 "data_size": 63488 00:38:24.766 }, 00:38:24.766 { 00:38:24.766 "name": "BaseBdev2", 00:38:24.766 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:24.766 "is_configured": true, 00:38:24.766 "data_offset": 2048, 00:38:24.766 "data_size": 63488 00:38:24.766 }, 00:38:24.766 { 00:38:24.766 "name": "BaseBdev3", 00:38:24.766 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:24.766 "is_configured": true, 00:38:24.766 "data_offset": 2048, 00:38:24.766 "data_size": 63488 00:38:24.766 }, 00:38:24.766 { 00:38:24.766 "name": "BaseBdev4", 00:38:24.766 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:24.766 "is_configured": true, 00:38:24.766 "data_offset": 2048, 00:38:24.766 "data_size": 63488 00:38:24.766 } 00:38:24.766 ] 00:38:24.766 }' 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:24.766 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:25.334 
17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.334 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:25.334 "name": "raid_bdev1", 00:38:25.334 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:25.334 "strip_size_kb": 64, 00:38:25.334 "state": "online", 00:38:25.334 "raid_level": "raid5f", 00:38:25.334 "superblock": true, 00:38:25.334 "num_base_bdevs": 4, 00:38:25.334 "num_base_bdevs_discovered": 3, 00:38:25.334 "num_base_bdevs_operational": 3, 00:38:25.334 "base_bdevs_list": [ 00:38:25.334 { 00:38:25.334 "name": null, 00:38:25.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:25.334 "is_configured": false, 00:38:25.334 "data_offset": 0, 00:38:25.334 "data_size": 63488 00:38:25.334 }, 00:38:25.334 { 00:38:25.334 "name": "BaseBdev2", 00:38:25.334 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:25.334 "is_configured": true, 00:38:25.334 "data_offset": 2048, 00:38:25.334 "data_size": 63488 00:38:25.334 }, 00:38:25.334 { 00:38:25.334 "name": "BaseBdev3", 00:38:25.334 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:25.334 "is_configured": true, 00:38:25.334 "data_offset": 2048, 00:38:25.334 
"data_size": 63488 00:38:25.334 }, 00:38:25.334 { 00:38:25.334 "name": "BaseBdev4", 00:38:25.334 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:25.334 "is_configured": true, 00:38:25.334 "data_offset": 2048, 00:38:25.335 "data_size": 63488 00:38:25.335 } 00:38:25.335 ] 00:38:25.335 }' 00:38:25.335 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:25.335 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:25.335 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:25.335 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:25.335 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:25.335 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.335 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:25.335 [2024-11-26 17:35:02.652185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:25.335 [2024-11-26 17:35:02.668353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:38:25.335 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.335 17:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:38:25.335 [2024-11-26 17:35:02.679246] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:26.273 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:26.533 "name": "raid_bdev1", 00:38:26.533 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:26.533 "strip_size_kb": 64, 00:38:26.533 "state": "online", 00:38:26.533 "raid_level": "raid5f", 00:38:26.533 "superblock": true, 00:38:26.533 "num_base_bdevs": 4, 00:38:26.533 "num_base_bdevs_discovered": 4, 00:38:26.533 "num_base_bdevs_operational": 4, 00:38:26.533 "process": { 00:38:26.533 "type": "rebuild", 00:38:26.533 "target": "spare", 00:38:26.533 "progress": { 00:38:26.533 "blocks": 19200, 00:38:26.533 "percent": 10 00:38:26.533 } 00:38:26.533 }, 00:38:26.533 "base_bdevs_list": [ 00:38:26.533 { 00:38:26.533 "name": "spare", 00:38:26.533 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:26.533 "is_configured": true, 00:38:26.533 "data_offset": 2048, 00:38:26.533 "data_size": 63488 00:38:26.533 }, 00:38:26.533 { 00:38:26.533 "name": "BaseBdev2", 00:38:26.533 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:26.533 "is_configured": true, 00:38:26.533 "data_offset": 2048, 00:38:26.533 "data_size": 63488 00:38:26.533 }, 00:38:26.533 { 
00:38:26.533 "name": "BaseBdev3", 00:38:26.533 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:26.533 "is_configured": true, 00:38:26.533 "data_offset": 2048, 00:38:26.533 "data_size": 63488 00:38:26.533 }, 00:38:26.533 { 00:38:26.533 "name": "BaseBdev4", 00:38:26.533 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:26.533 "is_configured": true, 00:38:26.533 "data_offset": 2048, 00:38:26.533 "data_size": 63488 00:38:26.533 } 00:38:26.533 ] 00:38:26.533 }' 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:38:26.533 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=657 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.533 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:26.533 "name": "raid_bdev1", 00:38:26.533 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:26.533 "strip_size_kb": 64, 00:38:26.533 "state": "online", 00:38:26.533 "raid_level": "raid5f", 00:38:26.533 "superblock": true, 00:38:26.533 "num_base_bdevs": 4, 00:38:26.533 "num_base_bdevs_discovered": 4, 00:38:26.533 "num_base_bdevs_operational": 4, 00:38:26.533 "process": { 00:38:26.533 "type": "rebuild", 00:38:26.533 "target": "spare", 00:38:26.533 "progress": { 00:38:26.533 "blocks": 21120, 00:38:26.533 "percent": 11 00:38:26.533 } 00:38:26.533 }, 00:38:26.533 "base_bdevs_list": [ 00:38:26.533 { 00:38:26.533 "name": "spare", 00:38:26.534 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:26.534 "is_configured": true, 00:38:26.534 "data_offset": 2048, 00:38:26.534 "data_size": 63488 00:38:26.534 }, 00:38:26.534 { 00:38:26.534 "name": "BaseBdev2", 00:38:26.534 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:26.534 "is_configured": true, 00:38:26.534 "data_offset": 2048, 00:38:26.534 "data_size": 63488 00:38:26.534 }, 00:38:26.534 { 
00:38:26.534 "name": "BaseBdev3", 00:38:26.534 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:26.534 "is_configured": true, 00:38:26.534 "data_offset": 2048, 00:38:26.534 "data_size": 63488 00:38:26.534 }, 00:38:26.534 { 00:38:26.534 "name": "BaseBdev4", 00:38:26.534 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:26.534 "is_configured": true, 00:38:26.534 "data_offset": 2048, 00:38:26.534 "data_size": 63488 00:38:26.534 } 00:38:26.534 ] 00:38:26.534 }' 00:38:26.534 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:26.534 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:26.534 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:26.534 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:26.534 17:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.942 17:35:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.942 17:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.942 17:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:27.942 "name": "raid_bdev1", 00:38:27.942 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:27.942 "strip_size_kb": 64, 00:38:27.942 "state": "online", 00:38:27.942 "raid_level": "raid5f", 00:38:27.942 "superblock": true, 00:38:27.942 "num_base_bdevs": 4, 00:38:27.942 "num_base_bdevs_discovered": 4, 00:38:27.943 "num_base_bdevs_operational": 4, 00:38:27.943 "process": { 00:38:27.943 "type": "rebuild", 00:38:27.943 "target": "spare", 00:38:27.943 "progress": { 00:38:27.943 "blocks": 42240, 00:38:27.943 "percent": 22 00:38:27.943 } 00:38:27.943 }, 00:38:27.943 "base_bdevs_list": [ 00:38:27.943 { 00:38:27.943 "name": "spare", 00:38:27.943 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:27.943 "is_configured": true, 00:38:27.943 "data_offset": 2048, 00:38:27.943 "data_size": 63488 00:38:27.943 }, 00:38:27.943 { 00:38:27.943 "name": "BaseBdev2", 00:38:27.943 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:27.943 "is_configured": true, 00:38:27.943 "data_offset": 2048, 00:38:27.943 "data_size": 63488 00:38:27.943 }, 00:38:27.943 { 00:38:27.943 "name": "BaseBdev3", 00:38:27.943 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:27.943 "is_configured": true, 00:38:27.943 "data_offset": 2048, 00:38:27.943 "data_size": 63488 00:38:27.943 }, 00:38:27.943 { 00:38:27.943 "name": "BaseBdev4", 00:38:27.943 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:27.943 "is_configured": true, 00:38:27.943 "data_offset": 2048, 00:38:27.943 "data_size": 63488 00:38:27.943 } 00:38:27.943 ] 00:38:27.943 }' 00:38:27.943 17:35:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:27.943 17:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:27.943 17:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:27.943 17:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:27.943 17:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.882 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:28.882 "name": "raid_bdev1", 00:38:28.882 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:28.882 "strip_size_kb": 64, 00:38:28.882 "state": 
"online", 00:38:28.882 "raid_level": "raid5f", 00:38:28.882 "superblock": true, 00:38:28.882 "num_base_bdevs": 4, 00:38:28.882 "num_base_bdevs_discovered": 4, 00:38:28.882 "num_base_bdevs_operational": 4, 00:38:28.882 "process": { 00:38:28.882 "type": "rebuild", 00:38:28.882 "target": "spare", 00:38:28.882 "progress": { 00:38:28.882 "blocks": 65280, 00:38:28.882 "percent": 34 00:38:28.882 } 00:38:28.882 }, 00:38:28.882 "base_bdevs_list": [ 00:38:28.882 { 00:38:28.882 "name": "spare", 00:38:28.882 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:28.882 "is_configured": true, 00:38:28.882 "data_offset": 2048, 00:38:28.882 "data_size": 63488 00:38:28.882 }, 00:38:28.882 { 00:38:28.882 "name": "BaseBdev2", 00:38:28.882 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:28.882 "is_configured": true, 00:38:28.882 "data_offset": 2048, 00:38:28.882 "data_size": 63488 00:38:28.882 }, 00:38:28.882 { 00:38:28.882 "name": "BaseBdev3", 00:38:28.882 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:28.882 "is_configured": true, 00:38:28.882 "data_offset": 2048, 00:38:28.882 "data_size": 63488 00:38:28.882 }, 00:38:28.882 { 00:38:28.882 "name": "BaseBdev4", 00:38:28.883 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:28.883 "is_configured": true, 00:38:28.883 "data_offset": 2048, 00:38:28.883 "data_size": 63488 00:38:28.883 } 00:38:28.883 ] 00:38:28.883 }' 00:38:28.883 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:28.883 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:28.883 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:28.883 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:28.883 17:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:30.260 "name": "raid_bdev1", 00:38:30.260 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:30.260 "strip_size_kb": 64, 00:38:30.260 "state": "online", 00:38:30.260 "raid_level": "raid5f", 00:38:30.260 "superblock": true, 00:38:30.260 "num_base_bdevs": 4, 00:38:30.260 "num_base_bdevs_discovered": 4, 00:38:30.260 "num_base_bdevs_operational": 4, 00:38:30.260 "process": { 00:38:30.260 "type": "rebuild", 00:38:30.260 "target": "spare", 00:38:30.260 "progress": { 00:38:30.260 "blocks": 86400, 00:38:30.260 "percent": 45 00:38:30.260 } 00:38:30.260 }, 00:38:30.260 "base_bdevs_list": [ 00:38:30.260 { 00:38:30.260 "name": "spare", 00:38:30.260 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 
00:38:30.260 "is_configured": true, 00:38:30.260 "data_offset": 2048, 00:38:30.260 "data_size": 63488 00:38:30.260 }, 00:38:30.260 { 00:38:30.260 "name": "BaseBdev2", 00:38:30.260 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:30.260 "is_configured": true, 00:38:30.260 "data_offset": 2048, 00:38:30.260 "data_size": 63488 00:38:30.260 }, 00:38:30.260 { 00:38:30.260 "name": "BaseBdev3", 00:38:30.260 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:30.260 "is_configured": true, 00:38:30.260 "data_offset": 2048, 00:38:30.260 "data_size": 63488 00:38:30.260 }, 00:38:30.260 { 00:38:30.260 "name": "BaseBdev4", 00:38:30.260 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:30.260 "is_configured": true, 00:38:30.260 "data_offset": 2048, 00:38:30.260 "data_size": 63488 00:38:30.260 } 00:38:30.260 ] 00:38:30.260 }' 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:30.260 17:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:31.195 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:31.196 17:35:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:31.196 "name": "raid_bdev1", 00:38:31.196 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:31.196 "strip_size_kb": 64, 00:38:31.196 "state": "online", 00:38:31.196 "raid_level": "raid5f", 00:38:31.196 "superblock": true, 00:38:31.196 "num_base_bdevs": 4, 00:38:31.196 "num_base_bdevs_discovered": 4, 00:38:31.196 "num_base_bdevs_operational": 4, 00:38:31.196 "process": { 00:38:31.196 "type": "rebuild", 00:38:31.196 "target": "spare", 00:38:31.196 "progress": { 00:38:31.196 "blocks": 109440, 00:38:31.196 "percent": 57 00:38:31.196 } 00:38:31.196 }, 00:38:31.196 "base_bdevs_list": [ 00:38:31.196 { 00:38:31.196 "name": "spare", 00:38:31.196 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:31.196 "is_configured": true, 00:38:31.196 "data_offset": 2048, 00:38:31.196 "data_size": 63488 00:38:31.196 }, 00:38:31.196 { 00:38:31.196 "name": "BaseBdev2", 00:38:31.196 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:31.196 "is_configured": true, 00:38:31.196 "data_offset": 2048, 00:38:31.196 "data_size": 63488 00:38:31.196 }, 00:38:31.196 { 00:38:31.196 "name": "BaseBdev3", 00:38:31.196 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:31.196 "is_configured": true, 00:38:31.196 "data_offset": 2048, 00:38:31.196 
"data_size": 63488 00:38:31.196 }, 00:38:31.196 { 00:38:31.196 "name": "BaseBdev4", 00:38:31.196 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:31.196 "is_configured": true, 00:38:31.196 "data_offset": 2048, 00:38:31.196 "data_size": 63488 00:38:31.196 } 00:38:31.196 ] 00:38:31.196 }' 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:31.196 17:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:32.133 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:32.133 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:32.133 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:32.133 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:32.133 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:32.133 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:32.133 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:32.133 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.133 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:32.134 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.393 
17:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.393 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:32.393 "name": "raid_bdev1", 00:38:32.393 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:32.393 "strip_size_kb": 64, 00:38:32.393 "state": "online", 00:38:32.393 "raid_level": "raid5f", 00:38:32.393 "superblock": true, 00:38:32.393 "num_base_bdevs": 4, 00:38:32.393 "num_base_bdevs_discovered": 4, 00:38:32.393 "num_base_bdevs_operational": 4, 00:38:32.393 "process": { 00:38:32.393 "type": "rebuild", 00:38:32.393 "target": "spare", 00:38:32.393 "progress": { 00:38:32.393 "blocks": 130560, 00:38:32.393 "percent": 68 00:38:32.393 } 00:38:32.393 }, 00:38:32.393 "base_bdevs_list": [ 00:38:32.393 { 00:38:32.393 "name": "spare", 00:38:32.393 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:32.393 "is_configured": true, 00:38:32.393 "data_offset": 2048, 00:38:32.393 "data_size": 63488 00:38:32.393 }, 00:38:32.393 { 00:38:32.393 "name": "BaseBdev2", 00:38:32.393 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:32.393 "is_configured": true, 00:38:32.393 "data_offset": 2048, 00:38:32.393 "data_size": 63488 00:38:32.393 }, 00:38:32.393 { 00:38:32.393 "name": "BaseBdev3", 00:38:32.393 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:32.393 "is_configured": true, 00:38:32.393 "data_offset": 2048, 00:38:32.393 "data_size": 63488 00:38:32.393 }, 00:38:32.393 { 00:38:32.393 "name": "BaseBdev4", 00:38:32.393 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:32.393 "is_configured": true, 00:38:32.393 "data_offset": 2048, 00:38:32.393 "data_size": 63488 00:38:32.393 } 00:38:32.393 ] 00:38:32.393 }' 00:38:32.393 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:32.393 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:32.393 17:35:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:32.393 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:32.393 17:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:33.329 "name": "raid_bdev1", 00:38:33.329 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:33.329 "strip_size_kb": 64, 00:38:33.329 "state": "online", 00:38:33.329 "raid_level": "raid5f", 00:38:33.329 "superblock": true, 00:38:33.329 "num_base_bdevs": 4, 00:38:33.329 "num_base_bdevs_discovered": 4, 00:38:33.329 "num_base_bdevs_operational": 
4, 00:38:33.329 "process": { 00:38:33.329 "type": "rebuild", 00:38:33.329 "target": "spare", 00:38:33.329 "progress": { 00:38:33.329 "blocks": 151680, 00:38:33.329 "percent": 79 00:38:33.329 } 00:38:33.329 }, 00:38:33.329 "base_bdevs_list": [ 00:38:33.329 { 00:38:33.329 "name": "spare", 00:38:33.329 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:33.329 "is_configured": true, 00:38:33.329 "data_offset": 2048, 00:38:33.329 "data_size": 63488 00:38:33.329 }, 00:38:33.329 { 00:38:33.329 "name": "BaseBdev2", 00:38:33.329 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:33.329 "is_configured": true, 00:38:33.329 "data_offset": 2048, 00:38:33.329 "data_size": 63488 00:38:33.329 }, 00:38:33.329 { 00:38:33.329 "name": "BaseBdev3", 00:38:33.329 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:33.329 "is_configured": true, 00:38:33.329 "data_offset": 2048, 00:38:33.329 "data_size": 63488 00:38:33.329 }, 00:38:33.329 { 00:38:33.329 "name": "BaseBdev4", 00:38:33.329 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:33.329 "is_configured": true, 00:38:33.329 "data_offset": 2048, 00:38:33.329 "data_size": 63488 00:38:33.329 } 00:38:33.329 ] 00:38:33.329 }' 00:38:33.329 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:33.587 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:33.587 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:33.587 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:33.587 17:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:34.522 
17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:34.522 "name": "raid_bdev1", 00:38:34.522 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:34.522 "strip_size_kb": 64, 00:38:34.522 "state": "online", 00:38:34.522 "raid_level": "raid5f", 00:38:34.522 "superblock": true, 00:38:34.522 "num_base_bdevs": 4, 00:38:34.522 "num_base_bdevs_discovered": 4, 00:38:34.522 "num_base_bdevs_operational": 4, 00:38:34.522 "process": { 00:38:34.522 "type": "rebuild", 00:38:34.522 "target": "spare", 00:38:34.522 "progress": { 00:38:34.522 "blocks": 174720, 00:38:34.522 "percent": 91 00:38:34.522 } 00:38:34.522 }, 00:38:34.522 "base_bdevs_list": [ 00:38:34.522 { 00:38:34.522 "name": "spare", 00:38:34.522 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:34.522 "is_configured": true, 00:38:34.522 "data_offset": 2048, 00:38:34.522 "data_size": 63488 00:38:34.522 }, 00:38:34.522 { 00:38:34.522 "name": "BaseBdev2", 00:38:34.522 "uuid": 
"fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:34.522 "is_configured": true, 00:38:34.522 "data_offset": 2048, 00:38:34.522 "data_size": 63488 00:38:34.522 }, 00:38:34.522 { 00:38:34.522 "name": "BaseBdev3", 00:38:34.522 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:34.522 "is_configured": true, 00:38:34.522 "data_offset": 2048, 00:38:34.522 "data_size": 63488 00:38:34.522 }, 00:38:34.522 { 00:38:34.522 "name": "BaseBdev4", 00:38:34.522 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:34.522 "is_configured": true, 00:38:34.522 "data_offset": 2048, 00:38:34.522 "data_size": 63488 00:38:34.522 } 00:38:34.522 ] 00:38:34.522 }' 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:34.522 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:34.780 17:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:34.780 17:35:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:34.780 17:35:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:35.346 [2024-11-26 17:35:12.763574] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:35.346 [2024-11-26 17:35:12.763678] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:35.346 [2024-11-26 17:35:12.763861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:35.604 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:35.863 "name": "raid_bdev1", 00:38:35.863 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:35.863 "strip_size_kb": 64, 00:38:35.863 "state": "online", 00:38:35.863 "raid_level": "raid5f", 00:38:35.863 "superblock": true, 00:38:35.863 "num_base_bdevs": 4, 00:38:35.863 "num_base_bdevs_discovered": 4, 00:38:35.863 "num_base_bdevs_operational": 4, 00:38:35.863 "base_bdevs_list": [ 00:38:35.863 { 00:38:35.863 "name": "spare", 00:38:35.863 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:35.863 "is_configured": true, 00:38:35.863 "data_offset": 2048, 00:38:35.863 "data_size": 63488 00:38:35.863 }, 00:38:35.863 { 00:38:35.863 "name": "BaseBdev2", 00:38:35.863 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:35.863 "is_configured": true, 00:38:35.863 "data_offset": 2048, 00:38:35.863 "data_size": 63488 00:38:35.863 }, 00:38:35.863 { 00:38:35.863 "name": "BaseBdev3", 00:38:35.863 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:35.863 "is_configured": true, 00:38:35.863 "data_offset": 2048, 00:38:35.863 "data_size": 63488 00:38:35.863 }, 
00:38:35.863 { 00:38:35.863 "name": "BaseBdev4", 00:38:35.863 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:35.863 "is_configured": true, 00:38:35.863 "data_offset": 2048, 00:38:35.863 "data_size": 63488 00:38:35.863 } 00:38:35.863 ] 00:38:35.863 }' 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.863 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:35.863 "name": "raid_bdev1", 00:38:35.863 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:35.863 "strip_size_kb": 64, 00:38:35.863 "state": "online", 00:38:35.863 "raid_level": "raid5f", 00:38:35.863 "superblock": true, 00:38:35.863 "num_base_bdevs": 4, 00:38:35.863 "num_base_bdevs_discovered": 4, 00:38:35.863 "num_base_bdevs_operational": 4, 00:38:35.863 "base_bdevs_list": [ 00:38:35.863 { 00:38:35.863 "name": "spare", 00:38:35.864 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:35.864 "is_configured": true, 00:38:35.864 "data_offset": 2048, 00:38:35.864 "data_size": 63488 00:38:35.864 }, 00:38:35.864 { 00:38:35.864 "name": "BaseBdev2", 00:38:35.864 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:35.864 "is_configured": true, 00:38:35.864 "data_offset": 2048, 00:38:35.864 "data_size": 63488 00:38:35.864 }, 00:38:35.864 { 00:38:35.864 "name": "BaseBdev3", 00:38:35.864 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:35.864 "is_configured": true, 00:38:35.864 "data_offset": 2048, 00:38:35.864 "data_size": 63488 00:38:35.864 }, 00:38:35.864 { 00:38:35.864 "name": "BaseBdev4", 00:38:35.864 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:35.864 "is_configured": true, 00:38:35.864 "data_offset": 2048, 00:38:35.864 "data_size": 63488 00:38:35.864 } 00:38:35.864 ] 00:38:35.864 }' 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:35.864 17:35:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.864 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:36.123 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.123 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:36.123 "name": "raid_bdev1", 00:38:36.123 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:36.123 "strip_size_kb": 64, 00:38:36.123 "state": "online", 00:38:36.123 "raid_level": "raid5f", 00:38:36.123 "superblock": true, 00:38:36.123 "num_base_bdevs": 4, 00:38:36.123 "num_base_bdevs_discovered": 4, 00:38:36.123 "num_base_bdevs_operational": 4, 00:38:36.123 
"base_bdevs_list": [ 00:38:36.123 { 00:38:36.123 "name": "spare", 00:38:36.123 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:36.123 "is_configured": true, 00:38:36.123 "data_offset": 2048, 00:38:36.123 "data_size": 63488 00:38:36.123 }, 00:38:36.123 { 00:38:36.123 "name": "BaseBdev2", 00:38:36.123 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:36.123 "is_configured": true, 00:38:36.123 "data_offset": 2048, 00:38:36.123 "data_size": 63488 00:38:36.123 }, 00:38:36.123 { 00:38:36.123 "name": "BaseBdev3", 00:38:36.123 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:36.123 "is_configured": true, 00:38:36.123 "data_offset": 2048, 00:38:36.123 "data_size": 63488 00:38:36.123 }, 00:38:36.123 { 00:38:36.123 "name": "BaseBdev4", 00:38:36.123 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:36.123 "is_configured": true, 00:38:36.123 "data_offset": 2048, 00:38:36.123 "data_size": 63488 00:38:36.123 } 00:38:36.123 ] 00:38:36.123 }' 00:38:36.123 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:36.123 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:36.382 [2024-11-26 17:35:13.742930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:36.382 [2024-11-26 17:35:13.742985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:36.382 [2024-11-26 17:35:13.743171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:36.382 [2024-11-26 17:35:13.743305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:38:36.382 [2024-11-26 17:35:13.743347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:36.382 17:35:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:36.641 /dev/nbd0 00:38:36.641 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:36.899 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:36.899 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:36.899 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:38:36.899 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:36.899 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:36.899 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:36.899 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:38:36.899 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:36.899 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:36.900 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:36.900 1+0 records in 00:38:36.900 1+0 records out 00:38:36.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477624 s, 8.6 MB/s 00:38:36.900 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:36.900 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:38:36.900 17:35:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:36.900 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:36.900 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:38:36.900 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:36.900 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:36.900 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:38:37.167 /dev/nbd1 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:38:37.167 1+0 records in 00:38:37.167 1+0 records out 00:38:37.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506626 s, 8.1 MB/s 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:37.167 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:37.425 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:38:37.425 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:37.425 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:37.425 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:37.425 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:38:37.425 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:37.425 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:37.684 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:38:37.684 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:37.684 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:37.684 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:37.684 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:37.684 17:35:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:37.684 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:37.684 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:37.684 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:37.684 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:37.942 [2024-11-26 17:35:15.330625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:37.942 [2024-11-26 17:35:15.330881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:37.942 [2024-11-26 17:35:15.330921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:38:37.942 [2024-11-26 17:35:15.330935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:37.942 [2024-11-26 17:35:15.334085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:37.942 [2024-11-26 17:35:15.334123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:37.942 [2024-11-26 17:35:15.334254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:37.942 [2024-11-26 17:35:15.334358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:37.942 [2024-11-26 17:35:15.334554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:37.942 [2024-11-26 17:35:15.334675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:37.942 [2024-11-26 17:35:15.334802] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:37.942 spare 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.942 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:38.201 [2024-11-26 17:35:15.434935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:38:38.201 [2024-11-26 17:35:15.434984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:38:38.201 [2024-11-26 17:35:15.435424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:38:38.201 [2024-11-26 17:35:15.444086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:38:38.201 [2024-11-26 17:35:15.444109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:38:38.201 [2024-11-26 17:35:15.444340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:38.201 "name": "raid_bdev1", 00:38:38.201 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:38.201 "strip_size_kb": 64, 00:38:38.201 "state": "online", 00:38:38.201 "raid_level": "raid5f", 00:38:38.201 "superblock": true, 00:38:38.201 "num_base_bdevs": 4, 00:38:38.201 "num_base_bdevs_discovered": 4, 00:38:38.201 "num_base_bdevs_operational": 4, 00:38:38.201 "base_bdevs_list": [ 00:38:38.201 { 00:38:38.201 "name": "spare", 00:38:38.201 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:38.201 "is_configured": true, 00:38:38.201 "data_offset": 2048, 00:38:38.201 "data_size": 63488 00:38:38.201 }, 00:38:38.201 { 00:38:38.201 "name": "BaseBdev2", 00:38:38.201 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:38.201 "is_configured": true, 00:38:38.201 "data_offset": 
2048, 00:38:38.201 "data_size": 63488 00:38:38.201 }, 00:38:38.201 { 00:38:38.201 "name": "BaseBdev3", 00:38:38.201 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:38.201 "is_configured": true, 00:38:38.201 "data_offset": 2048, 00:38:38.201 "data_size": 63488 00:38:38.201 }, 00:38:38.201 { 00:38:38.201 "name": "BaseBdev4", 00:38:38.201 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:38.201 "is_configured": true, 00:38:38.201 "data_offset": 2048, 00:38:38.201 "data_size": 63488 00:38:38.201 } 00:38:38.201 ] 00:38:38.201 }' 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:38.201 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:38.769 "name": 
"raid_bdev1", 00:38:38.769 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:38.769 "strip_size_kb": 64, 00:38:38.769 "state": "online", 00:38:38.769 "raid_level": "raid5f", 00:38:38.769 "superblock": true, 00:38:38.769 "num_base_bdevs": 4, 00:38:38.769 "num_base_bdevs_discovered": 4, 00:38:38.769 "num_base_bdevs_operational": 4, 00:38:38.769 "base_bdevs_list": [ 00:38:38.769 { 00:38:38.769 "name": "spare", 00:38:38.769 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:38.769 "is_configured": true, 00:38:38.769 "data_offset": 2048, 00:38:38.769 "data_size": 63488 00:38:38.769 }, 00:38:38.769 { 00:38:38.769 "name": "BaseBdev2", 00:38:38.769 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:38.769 "is_configured": true, 00:38:38.769 "data_offset": 2048, 00:38:38.769 "data_size": 63488 00:38:38.769 }, 00:38:38.769 { 00:38:38.769 "name": "BaseBdev3", 00:38:38.769 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:38.769 "is_configured": true, 00:38:38.769 "data_offset": 2048, 00:38:38.769 "data_size": 63488 00:38:38.769 }, 00:38:38.769 { 00:38:38.769 "name": "BaseBdev4", 00:38:38.769 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:38.769 "is_configured": true, 00:38:38.769 "data_offset": 2048, 00:38:38.769 "data_size": 63488 00:38:38.769 } 00:38:38.769 ] 00:38:38.769 }' 00:38:38.769 17:35:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.769 
17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:38.769 [2024-11-26 17:35:16.114131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:38.769 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:38.770 "name": "raid_bdev1", 00:38:38.770 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:38.770 "strip_size_kb": 64, 00:38:38.770 "state": "online", 00:38:38.770 "raid_level": "raid5f", 00:38:38.770 "superblock": true, 00:38:38.770 "num_base_bdevs": 4, 00:38:38.770 "num_base_bdevs_discovered": 3, 00:38:38.770 "num_base_bdevs_operational": 3, 00:38:38.770 "base_bdevs_list": [ 00:38:38.770 { 00:38:38.770 "name": null, 00:38:38.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:38.770 "is_configured": false, 00:38:38.770 "data_offset": 0, 00:38:38.770 "data_size": 63488 00:38:38.770 }, 00:38:38.770 { 00:38:38.770 "name": "BaseBdev2", 00:38:38.770 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:38.770 "is_configured": true, 00:38:38.770 "data_offset": 2048, 00:38:38.770 "data_size": 63488 00:38:38.770 }, 00:38:38.770 { 00:38:38.770 "name": "BaseBdev3", 00:38:38.770 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:38.770 "is_configured": true, 00:38:38.770 "data_offset": 2048, 00:38:38.770 "data_size": 63488 00:38:38.770 }, 00:38:38.770 { 00:38:38.770 "name": "BaseBdev4", 00:38:38.770 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:38.770 "is_configured": true, 00:38:38.770 "data_offset": 
2048, 00:38:38.770 "data_size": 63488 00:38:38.770 } 00:38:38.770 ] 00:38:38.770 }' 00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:38.770 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:39.338 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:39.338 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.338 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:39.338 [2024-11-26 17:35:16.570274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:39.338 [2024-11-26 17:35:16.570802] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:39.338 [2024-11-26 17:35:16.570841] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:38:39.338 [2024-11-26 17:35:16.570893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:39.338 [2024-11-26 17:35:16.586806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:38:39.339 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.339 17:35:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:38:39.339 [2024-11-26 17:35:16.596964] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:40.275 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:40.275 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:40.275 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:40.275 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:40.275 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:40.275 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.275 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.276 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.276 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:40.276 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.276 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:40.276 "name": "raid_bdev1", 00:38:40.276 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:40.276 "strip_size_kb": 64, 00:38:40.276 "state": "online", 00:38:40.276 
"raid_level": "raid5f", 00:38:40.276 "superblock": true, 00:38:40.276 "num_base_bdevs": 4, 00:38:40.276 "num_base_bdevs_discovered": 4, 00:38:40.276 "num_base_bdevs_operational": 4, 00:38:40.276 "process": { 00:38:40.276 "type": "rebuild", 00:38:40.276 "target": "spare", 00:38:40.276 "progress": { 00:38:40.276 "blocks": 17280, 00:38:40.276 "percent": 9 00:38:40.276 } 00:38:40.276 }, 00:38:40.276 "base_bdevs_list": [ 00:38:40.276 { 00:38:40.276 "name": "spare", 00:38:40.276 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:40.276 "is_configured": true, 00:38:40.276 "data_offset": 2048, 00:38:40.276 "data_size": 63488 00:38:40.276 }, 00:38:40.276 { 00:38:40.276 "name": "BaseBdev2", 00:38:40.276 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:40.276 "is_configured": true, 00:38:40.276 "data_offset": 2048, 00:38:40.276 "data_size": 63488 00:38:40.276 }, 00:38:40.276 { 00:38:40.276 "name": "BaseBdev3", 00:38:40.276 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:40.276 "is_configured": true, 00:38:40.276 "data_offset": 2048, 00:38:40.276 "data_size": 63488 00:38:40.276 }, 00:38:40.276 { 00:38:40.276 "name": "BaseBdev4", 00:38:40.276 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:40.276 "is_configured": true, 00:38:40.276 "data_offset": 2048, 00:38:40.276 "data_size": 63488 00:38:40.276 } 00:38:40.276 ] 00:38:40.276 }' 00:38:40.276 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:40.276 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:40.276 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:40.535 [2024-11-26 17:35:17.734508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:40.535 [2024-11-26 17:35:17.808132] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:40.535 [2024-11-26 17:35:17.808366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:40.535 [2024-11-26 17:35:17.808468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:40.535 [2024-11-26 17:35:17.808516] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:40.535 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:40.536 "name": "raid_bdev1", 00:38:40.536 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:40.536 "strip_size_kb": 64, 00:38:40.536 "state": "online", 00:38:40.536 "raid_level": "raid5f", 00:38:40.536 "superblock": true, 00:38:40.536 "num_base_bdevs": 4, 00:38:40.536 "num_base_bdevs_discovered": 3, 00:38:40.536 "num_base_bdevs_operational": 3, 00:38:40.536 "base_bdevs_list": [ 00:38:40.536 { 00:38:40.536 "name": null, 00:38:40.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:40.536 "is_configured": false, 00:38:40.536 "data_offset": 0, 00:38:40.536 "data_size": 63488 00:38:40.536 }, 00:38:40.536 { 00:38:40.536 "name": "BaseBdev2", 00:38:40.536 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:40.536 "is_configured": true, 00:38:40.536 "data_offset": 2048, 00:38:40.536 "data_size": 63488 00:38:40.536 }, 00:38:40.536 { 00:38:40.536 "name": "BaseBdev3", 00:38:40.536 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:40.536 "is_configured": true, 00:38:40.536 "data_offset": 2048, 00:38:40.536 "data_size": 63488 00:38:40.536 }, 00:38:40.536 { 00:38:40.536 "name": "BaseBdev4", 00:38:40.536 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:40.536 "is_configured": true, 00:38:40.536 "data_offset": 2048, 00:38:40.536 "data_size": 63488 00:38:40.536 } 00:38:40.536 ] 00:38:40.536 }' 
00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:40.536 17:35:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:41.103 17:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:41.103 17:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.103 17:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:41.103 [2024-11-26 17:35:18.302876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:41.103 [2024-11-26 17:35:18.302984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:41.103 [2024-11-26 17:35:18.303030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:38:41.103 [2024-11-26 17:35:18.303062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:41.103 [2024-11-26 17:35:18.303786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:41.103 [2024-11-26 17:35:18.303823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:41.103 [2024-11-26 17:35:18.303953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:41.103 [2024-11-26 17:35:18.303975] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:41.103 [2024-11-26 17:35:18.303991] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:38:41.103 [2024-11-26 17:35:18.304031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:41.103 [2024-11-26 17:35:18.321331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:38:41.103 spare 00:38:41.103 17:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.103 17:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:38:41.103 [2024-11-26 17:35:18.332774] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:42.038 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:42.038 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:42.038 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:42.038 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:42.038 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:42.038 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.038 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.038 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:42.039 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.039 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.039 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:42.039 "name": "raid_bdev1", 00:38:42.039 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:42.039 "strip_size_kb": 64, 00:38:42.039 "state": 
"online", 00:38:42.039 "raid_level": "raid5f", 00:38:42.039 "superblock": true, 00:38:42.039 "num_base_bdevs": 4, 00:38:42.039 "num_base_bdevs_discovered": 4, 00:38:42.039 "num_base_bdevs_operational": 4, 00:38:42.039 "process": { 00:38:42.039 "type": "rebuild", 00:38:42.039 "target": "spare", 00:38:42.039 "progress": { 00:38:42.039 "blocks": 17280, 00:38:42.039 "percent": 9 00:38:42.039 } 00:38:42.039 }, 00:38:42.039 "base_bdevs_list": [ 00:38:42.039 { 00:38:42.039 "name": "spare", 00:38:42.039 "uuid": "df4b26c9-64fb-5ec7-86f3-567c2628e043", 00:38:42.039 "is_configured": true, 00:38:42.039 "data_offset": 2048, 00:38:42.039 "data_size": 63488 00:38:42.039 }, 00:38:42.039 { 00:38:42.039 "name": "BaseBdev2", 00:38:42.039 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:42.039 "is_configured": true, 00:38:42.039 "data_offset": 2048, 00:38:42.039 "data_size": 63488 00:38:42.039 }, 00:38:42.039 { 00:38:42.039 "name": "BaseBdev3", 00:38:42.039 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:42.039 "is_configured": true, 00:38:42.039 "data_offset": 2048, 00:38:42.039 "data_size": 63488 00:38:42.039 }, 00:38:42.039 { 00:38:42.039 "name": "BaseBdev4", 00:38:42.039 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:42.039 "is_configured": true, 00:38:42.039 "data_offset": 2048, 00:38:42.039 "data_size": 63488 00:38:42.039 } 00:38:42.039 ] 00:38:42.039 }' 00:38:42.039 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:42.039 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:42.039 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:38:42.297 17:35:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.297 [2024-11-26 17:35:19.502321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:42.297 [2024-11-26 17:35:19.546428] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:42.297 [2024-11-26 17:35:19.546505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:42.297 [2024-11-26 17:35:19.546537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:42.297 [2024-11-26 17:35:19.546549] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:42.297 17:35:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:42.297 "name": "raid_bdev1", 00:38:42.297 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:42.297 "strip_size_kb": 64, 00:38:42.297 "state": "online", 00:38:42.297 "raid_level": "raid5f", 00:38:42.297 "superblock": true, 00:38:42.297 "num_base_bdevs": 4, 00:38:42.297 "num_base_bdevs_discovered": 3, 00:38:42.297 "num_base_bdevs_operational": 3, 00:38:42.297 "base_bdevs_list": [ 00:38:42.297 { 00:38:42.297 "name": null, 00:38:42.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.297 "is_configured": false, 00:38:42.297 "data_offset": 0, 00:38:42.297 "data_size": 63488 00:38:42.297 }, 00:38:42.297 { 00:38:42.297 "name": "BaseBdev2", 00:38:42.297 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:42.297 "is_configured": true, 00:38:42.297 "data_offset": 2048, 00:38:42.297 "data_size": 63488 00:38:42.297 }, 00:38:42.297 { 00:38:42.297 "name": "BaseBdev3", 00:38:42.297 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:42.297 "is_configured": true, 00:38:42.297 "data_offset": 2048, 00:38:42.297 "data_size": 63488 00:38:42.297 }, 00:38:42.297 { 00:38:42.297 "name": "BaseBdev4", 00:38:42.297 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:42.297 "is_configured": true, 00:38:42.297 "data_offset": 2048, 00:38:42.297 
"data_size": 63488 00:38:42.297 } 00:38:42.297 ] 00:38:42.297 }' 00:38:42.297 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:42.298 17:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:42.863 "name": "raid_bdev1", 00:38:42.863 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:42.863 "strip_size_kb": 64, 00:38:42.863 "state": "online", 00:38:42.863 "raid_level": "raid5f", 00:38:42.863 "superblock": true, 00:38:42.863 "num_base_bdevs": 4, 00:38:42.863 "num_base_bdevs_discovered": 3, 00:38:42.863 "num_base_bdevs_operational": 3, 00:38:42.863 "base_bdevs_list": [ 00:38:42.863 { 00:38:42.863 "name": null, 00:38:42.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.863 
"is_configured": false, 00:38:42.863 "data_offset": 0, 00:38:42.863 "data_size": 63488 00:38:42.863 }, 00:38:42.863 { 00:38:42.863 "name": "BaseBdev2", 00:38:42.863 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:42.863 "is_configured": true, 00:38:42.863 "data_offset": 2048, 00:38:42.863 "data_size": 63488 00:38:42.863 }, 00:38:42.863 { 00:38:42.863 "name": "BaseBdev3", 00:38:42.863 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:42.863 "is_configured": true, 00:38:42.863 "data_offset": 2048, 00:38:42.863 "data_size": 63488 00:38:42.863 }, 00:38:42.863 { 00:38:42.863 "name": "BaseBdev4", 00:38:42.863 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:42.863 "is_configured": true, 00:38:42.863 "data_offset": 2048, 00:38:42.863 "data_size": 63488 00:38:42.863 } 00:38:42.863 ] 00:38:42.863 }' 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.863 17:35:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.863 [2024-11-26 17:35:20.184112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:42.863 [2024-11-26 17:35:20.184177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:42.863 [2024-11-26 17:35:20.184209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:38:42.863 [2024-11-26 17:35:20.184222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:42.863 [2024-11-26 17:35:20.184831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:42.863 [2024-11-26 17:35:20.184851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:42.863 [2024-11-26 17:35:20.184948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:42.863 [2024-11-26 17:35:20.184965] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:42.863 [2024-11-26 17:35:20.184983] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:42.863 [2024-11-26 17:35:20.184996] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:38:42.863 BaseBdev1 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.863 17:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:43.795 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.796 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.054 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:44.054 "name": "raid_bdev1", 00:38:44.054 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:44.054 "strip_size_kb": 64, 00:38:44.054 "state": "online", 00:38:44.054 "raid_level": "raid5f", 00:38:44.054 "superblock": true, 00:38:44.054 "num_base_bdevs": 4, 00:38:44.054 "num_base_bdevs_discovered": 3, 00:38:44.054 "num_base_bdevs_operational": 3, 00:38:44.054 "base_bdevs_list": [ 00:38:44.054 { 00:38:44.054 "name": null, 00:38:44.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.054 "is_configured": false, 00:38:44.054 
"data_offset": 0, 00:38:44.054 "data_size": 63488 00:38:44.054 }, 00:38:44.054 { 00:38:44.054 "name": "BaseBdev2", 00:38:44.054 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:44.054 "is_configured": true, 00:38:44.054 "data_offset": 2048, 00:38:44.054 "data_size": 63488 00:38:44.054 }, 00:38:44.054 { 00:38:44.054 "name": "BaseBdev3", 00:38:44.054 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:44.054 "is_configured": true, 00:38:44.054 "data_offset": 2048, 00:38:44.054 "data_size": 63488 00:38:44.054 }, 00:38:44.054 { 00:38:44.054 "name": "BaseBdev4", 00:38:44.054 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:44.054 "is_configured": true, 00:38:44.054 "data_offset": 2048, 00:38:44.054 "data_size": 63488 00:38:44.054 } 00:38:44.054 ] 00:38:44.054 }' 00:38:44.054 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:44.054 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.313 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:44.313 "name": "raid_bdev1", 00:38:44.313 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:44.313 "strip_size_kb": 64, 00:38:44.313 "state": "online", 00:38:44.313 "raid_level": "raid5f", 00:38:44.313 "superblock": true, 00:38:44.313 "num_base_bdevs": 4, 00:38:44.313 "num_base_bdevs_discovered": 3, 00:38:44.313 "num_base_bdevs_operational": 3, 00:38:44.313 "base_bdevs_list": [ 00:38:44.313 { 00:38:44.313 "name": null, 00:38:44.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.314 "is_configured": false, 00:38:44.314 "data_offset": 0, 00:38:44.314 "data_size": 63488 00:38:44.314 }, 00:38:44.314 { 00:38:44.314 "name": "BaseBdev2", 00:38:44.314 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:44.314 "is_configured": true, 00:38:44.314 "data_offset": 2048, 00:38:44.314 "data_size": 63488 00:38:44.314 }, 00:38:44.314 { 00:38:44.314 "name": "BaseBdev3", 00:38:44.314 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:44.314 "is_configured": true, 00:38:44.314 "data_offset": 2048, 00:38:44.314 "data_size": 63488 00:38:44.314 }, 00:38:44.314 { 00:38:44.314 "name": "BaseBdev4", 00:38:44.314 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:44.314 "is_configured": true, 00:38:44.314 "data_offset": 2048, 00:38:44.314 "data_size": 63488 00:38:44.314 } 00:38:44.314 ] 00:38:44.314 }' 00:38:44.314 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:44.314 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:44.314 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:44.573 
17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:44.573 [2024-11-26 17:35:21.805061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:44.573 [2024-11-26 17:35:21.805296] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:44.573 [2024-11-26 17:35:21.805315] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:44.573 request: 00:38:44.573 { 00:38:44.573 "base_bdev": "BaseBdev1", 00:38:44.573 "raid_bdev": "raid_bdev1", 00:38:44.573 "method": "bdev_raid_add_base_bdev", 00:38:44.573 "req_id": 1 00:38:44.573 } 00:38:44.573 Got JSON-RPC error response 00:38:44.573 response: 00:38:44.573 { 00:38:44.573 "code": -22, 00:38:44.573 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:38:44.573 } 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:44.573 17:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.509 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:45.509 "name": "raid_bdev1", 00:38:45.509 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:45.509 "strip_size_kb": 64, 00:38:45.509 "state": "online", 00:38:45.509 "raid_level": "raid5f", 00:38:45.509 "superblock": true, 00:38:45.509 "num_base_bdevs": 4, 00:38:45.509 "num_base_bdevs_discovered": 3, 00:38:45.509 "num_base_bdevs_operational": 3, 00:38:45.509 "base_bdevs_list": [ 00:38:45.509 { 00:38:45.509 "name": null, 00:38:45.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:45.509 "is_configured": false, 00:38:45.509 "data_offset": 0, 00:38:45.509 "data_size": 63488 00:38:45.509 }, 00:38:45.509 { 00:38:45.509 "name": "BaseBdev2", 00:38:45.509 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:45.510 "is_configured": true, 00:38:45.510 "data_offset": 2048, 00:38:45.510 "data_size": 63488 00:38:45.510 }, 00:38:45.510 { 00:38:45.510 "name": "BaseBdev3", 00:38:45.510 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:45.510 "is_configured": true, 00:38:45.510 "data_offset": 2048, 00:38:45.510 "data_size": 63488 00:38:45.510 }, 00:38:45.510 { 00:38:45.510 "name": "BaseBdev4", 00:38:45.510 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:45.510 "is_configured": true, 00:38:45.510 "data_offset": 2048, 00:38:45.510 "data_size": 63488 00:38:45.510 } 00:38:45.510 ] 00:38:45.510 }' 00:38:45.510 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:45.510 17:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:46.077 "name": "raid_bdev1", 00:38:46.077 "uuid": "d305ef10-adcc-493b-844a-89564e6539a3", 00:38:46.077 "strip_size_kb": 64, 00:38:46.077 "state": "online", 00:38:46.077 "raid_level": "raid5f", 00:38:46.077 "superblock": true, 00:38:46.077 "num_base_bdevs": 4, 00:38:46.077 "num_base_bdevs_discovered": 3, 00:38:46.077 "num_base_bdevs_operational": 3, 00:38:46.077 "base_bdevs_list": [ 00:38:46.077 { 00:38:46.077 "name": null, 00:38:46.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:46.077 "is_configured": false, 00:38:46.077 "data_offset": 0, 00:38:46.077 "data_size": 63488 00:38:46.077 }, 00:38:46.077 { 00:38:46.077 "name": "BaseBdev2", 00:38:46.077 "uuid": "fe770283-ed3a-54c7-b6d0-0b2005a5e35d", 00:38:46.077 "is_configured": true, 
00:38:46.077 "data_offset": 2048, 00:38:46.077 "data_size": 63488 00:38:46.077 }, 00:38:46.077 { 00:38:46.077 "name": "BaseBdev3", 00:38:46.077 "uuid": "3736043f-88ea-58b3-a6ec-47f22938a79e", 00:38:46.077 "is_configured": true, 00:38:46.077 "data_offset": 2048, 00:38:46.077 "data_size": 63488 00:38:46.077 }, 00:38:46.077 { 00:38:46.077 "name": "BaseBdev4", 00:38:46.077 "uuid": "1298b0db-c61c-5d5a-93e7-18239ee93f41", 00:38:46.077 "is_configured": true, 00:38:46.077 "data_offset": 2048, 00:38:46.077 "data_size": 63488 00:38:46.077 } 00:38:46.077 ] 00:38:46.077 }' 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85592 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85592 ']' 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85592 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85592 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:46.077 killing process with pid 85592 00:38:46.077 Received shutdown signal, test 
time was about 60.000000 seconds 00:38:46.077 00:38:46.077 Latency(us) 00:38:46.077 [2024-11-26T17:35:23.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:46.077 [2024-11-26T17:35:23.524Z] =================================================================================================================== 00:38:46.077 [2024-11-26T17:35:23.524Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85592' 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85592 00:38:46.077 [2024-11-26 17:35:23.429292] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:46.077 17:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85592 00:38:46.077 [2024-11-26 17:35:23.429473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:46.077 [2024-11-26 17:35:23.429566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:46.077 [2024-11-26 17:35:23.429582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:38:46.644 [2024-11-26 17:35:23.978954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:48.018 17:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:38:48.018 00:38:48.018 real 0m27.947s 00:38:48.018 user 0m34.833s 00:38:48.018 sys 0m3.667s 00:38:48.018 17:35:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:48.018 ************************************ 00:38:48.018 END TEST raid5f_rebuild_test_sb 00:38:48.018 ************************************ 00:38:48.018 17:35:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:48.018 17:35:25 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:38:48.018 17:35:25 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:38:48.018 17:35:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:38:48.018 17:35:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:48.018 17:35:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:48.018 ************************************ 00:38:48.018 START TEST raid_state_function_test_sb_4k 00:38:48.018 ************************************ 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:48.018 17:35:25 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86408 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86408' 00:38:48.018 Process raid pid: 86408 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86408 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86408 ']' 00:38:48.018 17:35:25 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.018 17:35:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:48.018 [2024-11-26 17:35:25.419214] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:38:48.018 [2024-11-26 17:35:25.419533] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:48.277 [2024-11-26 17:35:25.589723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.277 [2024-11-26 17:35:25.708829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.542 [2024-11-26 17:35:25.926678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:48.542 [2024-11-26 17:35:25.926871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:49.111 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.111 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:38:49.111 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:38:49.111 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.111 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:49.111 [2024-11-26 17:35:26.466205] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:49.111 [2024-11-26 17:35:26.466422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:49.111 [2024-11-26 17:35:26.466445] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:49.111 [2024-11-26 17:35:26.466459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:49.111 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.111 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:49.112 
17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:49.112 "name": "Existed_Raid", 00:38:49.112 "uuid": "2160c65b-d91f-4e50-a581-ab11728b133c", 00:38:49.112 "strip_size_kb": 0, 00:38:49.112 "state": "configuring", 00:38:49.112 "raid_level": "raid1", 00:38:49.112 "superblock": true, 00:38:49.112 "num_base_bdevs": 2, 00:38:49.112 "num_base_bdevs_discovered": 0, 00:38:49.112 "num_base_bdevs_operational": 2, 00:38:49.112 "base_bdevs_list": [ 00:38:49.112 { 00:38:49.112 "name": "BaseBdev1", 00:38:49.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:49.112 "is_configured": false, 00:38:49.112 "data_offset": 0, 00:38:49.112 "data_size": 0 00:38:49.112 }, 00:38:49.112 { 00:38:49.112 "name": "BaseBdev2", 00:38:49.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:49.112 "is_configured": false, 00:38:49.112 "data_offset": 0, 00:38:49.112 "data_size": 0 00:38:49.112 } 00:38:49.112 ] 00:38:49.112 }' 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:49.112 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:49.680 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:38:49.680 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:49.681 [2024-11-26 17:35:26.938273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:49.681 [2024-11-26 17:35:26.938438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:49.681 [2024-11-26 17:35:26.946246] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:49.681 [2024-11-26 17:35:26.946417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:49.681 [2024-11-26 17:35:26.946539] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:49.681 [2024-11-26 17:35:26.946569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.681 17:35:26 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:49.681 [2024-11-26 17:35:26.993960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:49.681 BaseBdev1 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.681 17:35:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:49.681 [ 00:38:49.681 { 00:38:49.681 "name": "BaseBdev1", 00:38:49.681 "aliases": [ 00:38:49.681 
"2870296d-8182-44a5-963a-f79a09e4fad0" 00:38:49.681 ], 00:38:49.681 "product_name": "Malloc disk", 00:38:49.681 "block_size": 4096, 00:38:49.681 "num_blocks": 8192, 00:38:49.681 "uuid": "2870296d-8182-44a5-963a-f79a09e4fad0", 00:38:49.681 "assigned_rate_limits": { 00:38:49.681 "rw_ios_per_sec": 0, 00:38:49.681 "rw_mbytes_per_sec": 0, 00:38:49.681 "r_mbytes_per_sec": 0, 00:38:49.681 "w_mbytes_per_sec": 0 00:38:49.681 }, 00:38:49.681 "claimed": true, 00:38:49.681 "claim_type": "exclusive_write", 00:38:49.681 "zoned": false, 00:38:49.681 "supported_io_types": { 00:38:49.681 "read": true, 00:38:49.681 "write": true, 00:38:49.681 "unmap": true, 00:38:49.681 "flush": true, 00:38:49.681 "reset": true, 00:38:49.681 "nvme_admin": false, 00:38:49.681 "nvme_io": false, 00:38:49.681 "nvme_io_md": false, 00:38:49.681 "write_zeroes": true, 00:38:49.681 "zcopy": true, 00:38:49.681 "get_zone_info": false, 00:38:49.681 "zone_management": false, 00:38:49.681 "zone_append": false, 00:38:49.681 "compare": false, 00:38:49.681 "compare_and_write": false, 00:38:49.681 "abort": true, 00:38:49.681 "seek_hole": false, 00:38:49.681 "seek_data": false, 00:38:49.681 "copy": true, 00:38:49.681 "nvme_iov_md": false 00:38:49.681 }, 00:38:49.681 "memory_domains": [ 00:38:49.681 { 00:38:49.681 "dma_device_id": "system", 00:38:49.681 "dma_device_type": 1 00:38:49.681 }, 00:38:49.681 { 00:38:49.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:49.681 "dma_device_type": 2 00:38:49.681 } 00:38:49.681 ], 00:38:49.681 "driver_specific": {} 00:38:49.681 } 00:38:49.681 ] 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:49.681 "name": "Existed_Raid", 00:38:49.681 "uuid": "dd21b23b-aa59-4f0b-9fc2-1e03b0313100", 00:38:49.681 "strip_size_kb": 0, 00:38:49.681 "state": "configuring", 00:38:49.681 "raid_level": "raid1", 00:38:49.681 "superblock": true, 00:38:49.681 "num_base_bdevs": 2, 00:38:49.681 
"num_base_bdevs_discovered": 1, 00:38:49.681 "num_base_bdevs_operational": 2, 00:38:49.681 "base_bdevs_list": [ 00:38:49.681 { 00:38:49.681 "name": "BaseBdev1", 00:38:49.681 "uuid": "2870296d-8182-44a5-963a-f79a09e4fad0", 00:38:49.681 "is_configured": true, 00:38:49.681 "data_offset": 256, 00:38:49.681 "data_size": 7936 00:38:49.681 }, 00:38:49.681 { 00:38:49.681 "name": "BaseBdev2", 00:38:49.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:49.681 "is_configured": false, 00:38:49.681 "data_offset": 0, 00:38:49.681 "data_size": 0 00:38:49.681 } 00:38:49.681 ] 00:38:49.681 }' 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:49.681 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:50.249 [2024-11-26 17:35:27.474132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:50.249 [2024-11-26 17:35:27.474330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:50.249 [2024-11-26 17:35:27.486207] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:50.249 [2024-11-26 17:35:27.488433] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:50.249 [2024-11-26 17:35:27.488586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:38:50.249 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:50.250 "name": "Existed_Raid", 00:38:50.250 "uuid": "92853fd0-8793-46f4-b094-3d4cc97eae46", 00:38:50.250 "strip_size_kb": 0, 00:38:50.250 "state": "configuring", 00:38:50.250 "raid_level": "raid1", 00:38:50.250 "superblock": true, 00:38:50.250 "num_base_bdevs": 2, 00:38:50.250 "num_base_bdevs_discovered": 1, 00:38:50.250 "num_base_bdevs_operational": 2, 00:38:50.250 "base_bdevs_list": [ 00:38:50.250 { 00:38:50.250 "name": "BaseBdev1", 00:38:50.250 "uuid": "2870296d-8182-44a5-963a-f79a09e4fad0", 00:38:50.250 "is_configured": true, 00:38:50.250 "data_offset": 256, 00:38:50.250 "data_size": 7936 00:38:50.250 }, 00:38:50.250 { 00:38:50.250 "name": "BaseBdev2", 00:38:50.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:50.250 "is_configured": false, 00:38:50.250 "data_offset": 0, 00:38:50.250 "data_size": 0 00:38:50.250 } 00:38:50.250 ] 00:38:50.250 }' 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:50.250 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:50.509 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:38:50.509 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.509 17:35:27 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:50.768 [2024-11-26 17:35:27.979623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:50.768 [2024-11-26 17:35:27.979897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:38:50.768 [2024-11-26 17:35:27.979914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:50.768 [2024-11-26 17:35:27.980215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:50.768 BaseBdev2 00:38:50.768 [2024-11-26 17:35:27.980427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:38:50.768 [2024-11-26 17:35:27.980445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:38:50.768 [2024-11-26 17:35:27.980597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:38:50.768 17:35:27 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:50.768 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.769 17:35:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:50.769 [ 00:38:50.769 { 00:38:50.769 "name": "BaseBdev2", 00:38:50.769 "aliases": [ 00:38:50.769 "bc1b3a3c-4566-4c54-bc2f-b6b940bd5d6f" 00:38:50.769 ], 00:38:50.769 "product_name": "Malloc disk", 00:38:50.769 "block_size": 4096, 00:38:50.769 "num_blocks": 8192, 00:38:50.769 "uuid": "bc1b3a3c-4566-4c54-bc2f-b6b940bd5d6f", 00:38:50.769 "assigned_rate_limits": { 00:38:50.769 "rw_ios_per_sec": 0, 00:38:50.769 "rw_mbytes_per_sec": 0, 00:38:50.769 "r_mbytes_per_sec": 0, 00:38:50.769 "w_mbytes_per_sec": 0 00:38:50.769 }, 00:38:50.769 "claimed": true, 00:38:50.769 "claim_type": "exclusive_write", 00:38:50.769 "zoned": false, 00:38:50.769 "supported_io_types": { 00:38:50.769 "read": true, 00:38:50.769 "write": true, 00:38:50.769 "unmap": true, 00:38:50.769 "flush": true, 00:38:50.769 "reset": true, 00:38:50.769 "nvme_admin": false, 00:38:50.769 "nvme_io": false, 00:38:50.769 "nvme_io_md": false, 00:38:50.769 "write_zeroes": true, 00:38:50.769 "zcopy": true, 00:38:50.769 "get_zone_info": false, 00:38:50.769 "zone_management": false, 00:38:50.769 "zone_append": false, 00:38:50.769 "compare": false, 00:38:50.769 "compare_and_write": false, 00:38:50.769 "abort": true, 00:38:50.769 "seek_hole": false, 00:38:50.769 "seek_data": false, 00:38:50.769 "copy": true, 00:38:50.769 "nvme_iov_md": false 
00:38:50.769 }, 00:38:50.769 "memory_domains": [ 00:38:50.769 { 00:38:50.769 "dma_device_id": "system", 00:38:50.769 "dma_device_type": 1 00:38:50.769 }, 00:38:50.769 { 00:38:50.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:50.769 "dma_device_type": 2 00:38:50.769 } 00:38:50.769 ], 00:38:50.769 "driver_specific": {} 00:38:50.769 } 00:38:50.769 ] 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:50.769 "name": "Existed_Raid", 00:38:50.769 "uuid": "92853fd0-8793-46f4-b094-3d4cc97eae46", 00:38:50.769 "strip_size_kb": 0, 00:38:50.769 "state": "online", 00:38:50.769 "raid_level": "raid1", 00:38:50.769 "superblock": true, 00:38:50.769 "num_base_bdevs": 2, 00:38:50.769 "num_base_bdevs_discovered": 2, 00:38:50.769 "num_base_bdevs_operational": 2, 00:38:50.769 "base_bdevs_list": [ 00:38:50.769 { 00:38:50.769 "name": "BaseBdev1", 00:38:50.769 "uuid": "2870296d-8182-44a5-963a-f79a09e4fad0", 00:38:50.769 "is_configured": true, 00:38:50.769 "data_offset": 256, 00:38:50.769 "data_size": 7936 00:38:50.769 }, 00:38:50.769 { 00:38:50.769 "name": "BaseBdev2", 00:38:50.769 "uuid": "bc1b3a3c-4566-4c54-bc2f-b6b940bd5d6f", 00:38:50.769 "is_configured": true, 00:38:50.769 "data_offset": 256, 00:38:50.769 "data_size": 7936 00:38:50.769 } 00:38:50.769 ] 00:38:50.769 }' 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:50.769 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:51.028 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:38:51.028 17:35:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:51.028 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:51.028 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:51.028 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:38:51.028 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:51.028 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:51.028 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:51.028 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.028 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:51.028 [2024-11-26 17:35:28.456062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:51.287 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:51.288 "name": "Existed_Raid", 00:38:51.288 "aliases": [ 00:38:51.288 "92853fd0-8793-46f4-b094-3d4cc97eae46" 00:38:51.288 ], 00:38:51.288 "product_name": "Raid Volume", 00:38:51.288 "block_size": 4096, 00:38:51.288 "num_blocks": 7936, 00:38:51.288 "uuid": "92853fd0-8793-46f4-b094-3d4cc97eae46", 00:38:51.288 "assigned_rate_limits": { 00:38:51.288 "rw_ios_per_sec": 0, 00:38:51.288 "rw_mbytes_per_sec": 0, 00:38:51.288 "r_mbytes_per_sec": 0, 00:38:51.288 "w_mbytes_per_sec": 0 00:38:51.288 }, 00:38:51.288 "claimed": false, 00:38:51.288 "zoned": false, 00:38:51.288 "supported_io_types": { 00:38:51.288 "read": true, 
00:38:51.288 "write": true, 00:38:51.288 "unmap": false, 00:38:51.288 "flush": false, 00:38:51.288 "reset": true, 00:38:51.288 "nvme_admin": false, 00:38:51.288 "nvme_io": false, 00:38:51.288 "nvme_io_md": false, 00:38:51.288 "write_zeroes": true, 00:38:51.288 "zcopy": false, 00:38:51.288 "get_zone_info": false, 00:38:51.288 "zone_management": false, 00:38:51.288 "zone_append": false, 00:38:51.288 "compare": false, 00:38:51.288 "compare_and_write": false, 00:38:51.288 "abort": false, 00:38:51.288 "seek_hole": false, 00:38:51.288 "seek_data": false, 00:38:51.288 "copy": false, 00:38:51.288 "nvme_iov_md": false 00:38:51.288 }, 00:38:51.288 "memory_domains": [ 00:38:51.288 { 00:38:51.288 "dma_device_id": "system", 00:38:51.288 "dma_device_type": 1 00:38:51.288 }, 00:38:51.288 { 00:38:51.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:51.288 "dma_device_type": 2 00:38:51.288 }, 00:38:51.288 { 00:38:51.288 "dma_device_id": "system", 00:38:51.288 "dma_device_type": 1 00:38:51.288 }, 00:38:51.288 { 00:38:51.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:51.288 "dma_device_type": 2 00:38:51.288 } 00:38:51.288 ], 00:38:51.288 "driver_specific": { 00:38:51.288 "raid": { 00:38:51.288 "uuid": "92853fd0-8793-46f4-b094-3d4cc97eae46", 00:38:51.288 "strip_size_kb": 0, 00:38:51.288 "state": "online", 00:38:51.288 "raid_level": "raid1", 00:38:51.288 "superblock": true, 00:38:51.288 "num_base_bdevs": 2, 00:38:51.288 "num_base_bdevs_discovered": 2, 00:38:51.288 "num_base_bdevs_operational": 2, 00:38:51.288 "base_bdevs_list": [ 00:38:51.288 { 00:38:51.288 "name": "BaseBdev1", 00:38:51.288 "uuid": "2870296d-8182-44a5-963a-f79a09e4fad0", 00:38:51.288 "is_configured": true, 00:38:51.288 "data_offset": 256, 00:38:51.288 "data_size": 7936 00:38:51.288 }, 00:38:51.288 { 00:38:51.288 "name": "BaseBdev2", 00:38:51.288 "uuid": "bc1b3a3c-4566-4c54-bc2f-b6b940bd5d6f", 00:38:51.288 "is_configured": true, 00:38:51.288 "data_offset": 256, 00:38:51.288 "data_size": 7936 00:38:51.288 } 
00:38:51.288 ] 00:38:51.288 } 00:38:51.288 } 00:38:51.288 }' 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:38:51.288 BaseBdev2' 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.288 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:51.288 [2024-11-26 17:35:28.695858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:38:51.548 17:35:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:51.548 "name": "Existed_Raid", 00:38:51.548 "uuid": "92853fd0-8793-46f4-b094-3d4cc97eae46", 00:38:51.548 "strip_size_kb": 0, 00:38:51.548 "state": "online", 00:38:51.548 "raid_level": "raid1", 00:38:51.548 "superblock": true, 00:38:51.548 
"num_base_bdevs": 2, 00:38:51.548 "num_base_bdevs_discovered": 1, 00:38:51.548 "num_base_bdevs_operational": 1, 00:38:51.548 "base_bdevs_list": [ 00:38:51.548 { 00:38:51.548 "name": null, 00:38:51.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:51.548 "is_configured": false, 00:38:51.548 "data_offset": 0, 00:38:51.548 "data_size": 7936 00:38:51.548 }, 00:38:51.548 { 00:38:51.548 "name": "BaseBdev2", 00:38:51.548 "uuid": "bc1b3a3c-4566-4c54-bc2f-b6b940bd5d6f", 00:38:51.548 "is_configured": true, 00:38:51.548 "data_offset": 256, 00:38:51.548 "data_size": 7936 00:38:51.548 } 00:38:51.548 ] 00:38:51.548 }' 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:51.548 17:35:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:51.807 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:38:51.807 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:51.807 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:51.807 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.807 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:51.807 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:52.065 [2024-11-26 17:35:29.281617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:52.065 [2024-11-26 17:35:29.282042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:52.065 [2024-11-26 17:35:29.446589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:52.065 [2024-11-26 17:35:29.446683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:52.065 [2024-11-26 17:35:29.446707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:38:52.065 17:35:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86408 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86408 ']' 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86408 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.065 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86408 00:38:52.324 killing process with pid 86408 00:38:52.324 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:52.324 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:52.324 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86408' 00:38:52.324 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86408 00:38:52.324 [2024-11-26 17:35:29.540986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:52.324 17:35:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86408 00:38:52.324 [2024-11-26 17:35:29.559516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:53.701 ************************************ 00:38:53.701 END TEST raid_state_function_test_sb_4k 00:38:53.701 ************************************ 00:38:53.701 17:35:30 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:38:53.701 00:38:53.701 real 0m5.503s 00:38:53.701 user 0m7.926s 00:38:53.701 sys 0m0.922s 00:38:53.701 17:35:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.701 17:35:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:53.701 17:35:30 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:38:53.701 17:35:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:53.701 17:35:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:53.701 17:35:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:53.701 ************************************ 00:38:53.701 START TEST raid_superblock_test_4k 00:38:53.701 ************************************ 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:38:53.701 
17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:38:53.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86660 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86660 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86660 ']' 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:53.701 17:35:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:53.701 [2024-11-26 17:35:31.031482] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:38:53.701 [2024-11-26 17:35:31.031911] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86660 ] 00:38:53.960 [2024-11-26 17:35:31.234231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.219 [2024-11-26 17:35:31.416083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.478 [2024-11-26 17:35:31.674810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:54.478 [2024-11-26 17:35:31.674866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:54.738 17:35:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:54.738 17:35:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.739 17:35:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:54.739 malloc1 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:54.739 [2024-11-26 17:35:32.021400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:54.739 [2024-11-26 17:35:32.021689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:54.739 [2024-11-26 17:35:32.021757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:54.739 [2024-11-26 17:35:32.021846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:54.739 [2024-11-26 17:35:32.024877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:54.739 [2024-11-26 17:35:32.025057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:54.739 pt1 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:54.739 malloc2 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:54.739 [2024-11-26 17:35:32.084496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:54.739 [2024-11-26 17:35:32.084731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:54.739 [2024-11-26 17:35:32.084803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:54.739 [2024-11-26 17:35:32.084887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:54.739 [2024-11-26 17:35:32.087753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:54.739 [2024-11-26 
17:35:32.087892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:54.739 pt2 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:54.739 [2024-11-26 17:35:32.092570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:54.739 [2024-11-26 17:35:32.095302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:54.739 [2024-11-26 17:35:32.095622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:54.739 [2024-11-26 17:35:32.095737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:54.739 [2024-11-26 17:35:32.096074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:54.739 [2024-11-26 17:35:32.096376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:54.739 [2024-11-26 17:35:32.096404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:38:54.739 [2024-11-26 17:35:32.096607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:54.739 "name": "raid_bdev1", 00:38:54.739 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 00:38:54.739 "strip_size_kb": 0, 00:38:54.739 "state": "online", 00:38:54.739 "raid_level": "raid1", 00:38:54.739 "superblock": true, 00:38:54.739 "num_base_bdevs": 2, 00:38:54.739 
"num_base_bdevs_discovered": 2, 00:38:54.739 "num_base_bdevs_operational": 2, 00:38:54.739 "base_bdevs_list": [ 00:38:54.739 { 00:38:54.739 "name": "pt1", 00:38:54.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:54.739 "is_configured": true, 00:38:54.739 "data_offset": 256, 00:38:54.739 "data_size": 7936 00:38:54.739 }, 00:38:54.739 { 00:38:54.739 "name": "pt2", 00:38:54.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:54.739 "is_configured": true, 00:38:54.739 "data_offset": 256, 00:38:54.739 "data_size": 7936 00:38:54.739 } 00:38:54.739 ] 00:38:54.739 }' 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:54.739 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.308 [2024-11-26 17:35:32.553065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.308 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:55.308 "name": "raid_bdev1", 00:38:55.308 "aliases": [ 00:38:55.308 "5369ed9b-329e-4719-9786-c27dd674dde6" 00:38:55.308 ], 00:38:55.308 "product_name": "Raid Volume", 00:38:55.308 "block_size": 4096, 00:38:55.308 "num_blocks": 7936, 00:38:55.308 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 00:38:55.308 "assigned_rate_limits": { 00:38:55.308 "rw_ios_per_sec": 0, 00:38:55.308 "rw_mbytes_per_sec": 0, 00:38:55.308 "r_mbytes_per_sec": 0, 00:38:55.308 "w_mbytes_per_sec": 0 00:38:55.308 }, 00:38:55.308 "claimed": false, 00:38:55.308 "zoned": false, 00:38:55.308 "supported_io_types": { 00:38:55.308 "read": true, 00:38:55.308 "write": true, 00:38:55.308 "unmap": false, 00:38:55.308 "flush": false, 00:38:55.308 "reset": true, 00:38:55.308 "nvme_admin": false, 00:38:55.308 "nvme_io": false, 00:38:55.308 "nvme_io_md": false, 00:38:55.308 "write_zeroes": true, 00:38:55.308 "zcopy": false, 00:38:55.308 "get_zone_info": false, 00:38:55.308 "zone_management": false, 00:38:55.308 "zone_append": false, 00:38:55.308 "compare": false, 00:38:55.308 "compare_and_write": false, 00:38:55.308 "abort": false, 00:38:55.308 "seek_hole": false, 00:38:55.308 "seek_data": false, 00:38:55.308 "copy": false, 00:38:55.308 "nvme_iov_md": false 00:38:55.308 }, 00:38:55.308 "memory_domains": [ 00:38:55.308 { 00:38:55.308 "dma_device_id": "system", 00:38:55.308 "dma_device_type": 1 00:38:55.308 }, 00:38:55.308 { 00:38:55.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:55.309 "dma_device_type": 2 00:38:55.309 }, 00:38:55.309 { 00:38:55.309 "dma_device_id": "system", 00:38:55.309 "dma_device_type": 1 00:38:55.309 }, 00:38:55.309 { 00:38:55.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:55.309 "dma_device_type": 2 00:38:55.309 } 00:38:55.309 ], 
00:38:55.309 "driver_specific": { 00:38:55.309 "raid": { 00:38:55.309 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 00:38:55.309 "strip_size_kb": 0, 00:38:55.309 "state": "online", 00:38:55.309 "raid_level": "raid1", 00:38:55.309 "superblock": true, 00:38:55.309 "num_base_bdevs": 2, 00:38:55.309 "num_base_bdevs_discovered": 2, 00:38:55.309 "num_base_bdevs_operational": 2, 00:38:55.309 "base_bdevs_list": [ 00:38:55.309 { 00:38:55.309 "name": "pt1", 00:38:55.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:55.309 "is_configured": true, 00:38:55.309 "data_offset": 256, 00:38:55.309 "data_size": 7936 00:38:55.309 }, 00:38:55.309 { 00:38:55.309 "name": "pt2", 00:38:55.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:55.309 "is_configured": true, 00:38:55.309 "data_offset": 256, 00:38:55.309 "data_size": 7936 00:38:55.309 } 00:38:55.309 ] 00:38:55.309 } 00:38:55.309 } 00:38:55.309 }' 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:38:55.309 pt2' 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.309 17:35:32 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:55.309 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.569 [2024-11-26 17:35:32.764941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5369ed9b-329e-4719-9786-c27dd674dde6 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5369ed9b-329e-4719-9786-c27dd674dde6 ']' 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.569 [2024-11-26 17:35:32.804680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:55.569 [2024-11-26 17:35:32.806464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:55.569 [2024-11-26 17:35:32.806587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:55.569 [2024-11-26 17:35:32.806657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:55.569 [2024-11-26 17:35:32.806674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.569 [2024-11-26 17:35:32.924752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:38:55.569 [2024-11-26 17:35:32.927254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:38:55.569 [2024-11-26 17:35:32.927481] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:38:55.569 [2024-11-26 17:35:32.927553] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:38:55.569 [2024-11-26 17:35:32.927573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:55.569 [2024-11-26 17:35:32.927588] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:38:55.569 request: 00:38:55.569 { 00:38:55.569 "name": "raid_bdev1", 00:38:55.569 "raid_level": "raid1", 00:38:55.569 "base_bdevs": [ 00:38:55.569 "malloc1", 00:38:55.569 "malloc2" 00:38:55.569 ], 00:38:55.569 "superblock": false, 00:38:55.569 "method": "bdev_raid_create", 00:38:55.569 "req_id": 1 00:38:55.569 } 00:38:55.569 Got JSON-RPC error response 00:38:55.569 response: 00:38:55.569 { 00:38:55.569 "code": -17, 00:38:55.569 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:38:55.569 } 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.569 [2024-11-26 17:35:32.980743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:55.569 [2024-11-26 17:35:32.980924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:55.569 [2024-11-26 17:35:32.980982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:55.569 [2024-11-26 17:35:32.981092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:55.569 [2024-11-26 17:35:32.984233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:55.569 [2024-11-26 17:35:32.984383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:55.569 [2024-11-26 17:35:32.984545] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:55.569 [2024-11-26 17:35:32.984685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:55.569 pt1 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:38:55.569 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.570 17:35:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.570 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.829 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:55.829 "name": "raid_bdev1", 00:38:55.829 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 00:38:55.829 "strip_size_kb": 0, 00:38:55.829 "state": "configuring", 00:38:55.829 "raid_level": "raid1", 00:38:55.829 "superblock": true, 00:38:55.829 "num_base_bdevs": 2, 00:38:55.829 "num_base_bdevs_discovered": 1, 00:38:55.829 "num_base_bdevs_operational": 2, 00:38:55.829 "base_bdevs_list": [ 00:38:55.829 { 00:38:55.829 "name": "pt1", 00:38:55.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:55.829 "is_configured": true, 00:38:55.829 "data_offset": 256, 00:38:55.829 "data_size": 7936 00:38:55.829 }, 00:38:55.829 { 00:38:55.829 "name": null, 00:38:55.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:55.829 "is_configured": false, 00:38:55.829 "data_offset": 256, 00:38:55.829 "data_size": 7936 00:38:55.829 } 
00:38:55.829 ] 00:38:55.829 }' 00:38:55.829 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:55.829 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.088 [2024-11-26 17:35:33.421075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:56.088 [2024-11-26 17:35:33.421363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:56.088 [2024-11-26 17:35:33.421425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:38:56.088 [2024-11-26 17:35:33.421517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:56.088 [2024-11-26 17:35:33.422153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:56.088 [2024-11-26 17:35:33.422294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:56.088 [2024-11-26 17:35:33.422419] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:56.088 [2024-11-26 17:35:33.422456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:56.088 [2024-11-26 17:35:33.422622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:38:56.088 [2024-11-26 17:35:33.422637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:56.088 [2024-11-26 17:35:33.422935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:38:56.088 [2024-11-26 17:35:33.423108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:38:56.088 [2024-11-26 17:35:33.423119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:38:56.088 [2024-11-26 17:35:33.423293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:56.088 pt2 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:56.088 "name": "raid_bdev1", 00:38:56.088 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 00:38:56.088 "strip_size_kb": 0, 00:38:56.088 "state": "online", 00:38:56.088 "raid_level": "raid1", 00:38:56.088 "superblock": true, 00:38:56.088 "num_base_bdevs": 2, 00:38:56.088 "num_base_bdevs_discovered": 2, 00:38:56.088 "num_base_bdevs_operational": 2, 00:38:56.088 "base_bdevs_list": [ 00:38:56.088 { 00:38:56.088 "name": "pt1", 00:38:56.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:56.088 "is_configured": true, 00:38:56.088 "data_offset": 256, 00:38:56.088 "data_size": 7936 00:38:56.088 }, 00:38:56.088 { 00:38:56.088 "name": "pt2", 00:38:56.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:56.088 "is_configured": true, 00:38:56.088 "data_offset": 256, 00:38:56.088 "data_size": 7936 00:38:56.088 } 00:38:56.088 ] 00:38:56.088 }' 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:56.088 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:56.657 [2024-11-26 17:35:33.889455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:56.657 "name": "raid_bdev1", 00:38:56.657 "aliases": [ 00:38:56.657 "5369ed9b-329e-4719-9786-c27dd674dde6" 00:38:56.657 ], 00:38:56.657 "product_name": "Raid Volume", 00:38:56.657 "block_size": 4096, 00:38:56.657 "num_blocks": 7936, 00:38:56.657 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 00:38:56.657 "assigned_rate_limits": { 00:38:56.657 "rw_ios_per_sec": 0, 00:38:56.657 "rw_mbytes_per_sec": 0, 00:38:56.657 "r_mbytes_per_sec": 0, 00:38:56.657 "w_mbytes_per_sec": 0 00:38:56.657 }, 00:38:56.657 "claimed": false, 00:38:56.657 "zoned": false, 00:38:56.657 "supported_io_types": { 00:38:56.657 "read": true, 00:38:56.657 "write": true, 00:38:56.657 "unmap": false, 
00:38:56.657 "flush": false, 00:38:56.657 "reset": true, 00:38:56.657 "nvme_admin": false, 00:38:56.657 "nvme_io": false, 00:38:56.657 "nvme_io_md": false, 00:38:56.657 "write_zeroes": true, 00:38:56.657 "zcopy": false, 00:38:56.657 "get_zone_info": false, 00:38:56.657 "zone_management": false, 00:38:56.657 "zone_append": false, 00:38:56.657 "compare": false, 00:38:56.657 "compare_and_write": false, 00:38:56.657 "abort": false, 00:38:56.657 "seek_hole": false, 00:38:56.657 "seek_data": false, 00:38:56.657 "copy": false, 00:38:56.657 "nvme_iov_md": false 00:38:56.657 }, 00:38:56.657 "memory_domains": [ 00:38:56.657 { 00:38:56.657 "dma_device_id": "system", 00:38:56.657 "dma_device_type": 1 00:38:56.657 }, 00:38:56.657 { 00:38:56.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:56.657 "dma_device_type": 2 00:38:56.657 }, 00:38:56.657 { 00:38:56.657 "dma_device_id": "system", 00:38:56.657 "dma_device_type": 1 00:38:56.657 }, 00:38:56.657 { 00:38:56.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:56.657 "dma_device_type": 2 00:38:56.657 } 00:38:56.657 ], 00:38:56.657 "driver_specific": { 00:38:56.657 "raid": { 00:38:56.657 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 00:38:56.657 "strip_size_kb": 0, 00:38:56.657 "state": "online", 00:38:56.657 "raid_level": "raid1", 00:38:56.657 "superblock": true, 00:38:56.657 "num_base_bdevs": 2, 00:38:56.657 "num_base_bdevs_discovered": 2, 00:38:56.657 "num_base_bdevs_operational": 2, 00:38:56.657 "base_bdevs_list": [ 00:38:56.657 { 00:38:56.657 "name": "pt1", 00:38:56.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:56.657 "is_configured": true, 00:38:56.657 "data_offset": 256, 00:38:56.657 "data_size": 7936 00:38:56.657 }, 00:38:56.657 { 00:38:56.657 "name": "pt2", 00:38:56.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:56.657 "is_configured": true, 00:38:56.657 "data_offset": 256, 00:38:56.657 "data_size": 7936 00:38:56.657 } 00:38:56.657 ] 00:38:56.657 } 00:38:56.657 } 00:38:56.657 }' 00:38:56.657 
17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:38:56.657 pt2' 00:38:56.657 17:35:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:56.657 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:38:56.657 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:56.657 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:38:56.657 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.657 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:56.657 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.657 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.657 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:56.657 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:56.658 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:56.658 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:56.658 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:38:56.658 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.658 
17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.658 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:38:56.917 [2024-11-26 17:35:34.121383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5369ed9b-329e-4719-9786-c27dd674dde6 '!=' 5369ed9b-329e-4719-9786-c27dd674dde6 ']' 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.917 [2024-11-26 17:35:34.169205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:38:56.917 
17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:56.917 "name": "raid_bdev1", 00:38:56.917 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 
00:38:56.917 "strip_size_kb": 0, 00:38:56.917 "state": "online", 00:38:56.917 "raid_level": "raid1", 00:38:56.917 "superblock": true, 00:38:56.917 "num_base_bdevs": 2, 00:38:56.917 "num_base_bdevs_discovered": 1, 00:38:56.917 "num_base_bdevs_operational": 1, 00:38:56.917 "base_bdevs_list": [ 00:38:56.917 { 00:38:56.917 "name": null, 00:38:56.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.917 "is_configured": false, 00:38:56.917 "data_offset": 0, 00:38:56.917 "data_size": 7936 00:38:56.917 }, 00:38:56.917 { 00:38:56.917 "name": "pt2", 00:38:56.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:56.917 "is_configured": true, 00:38:56.917 "data_offset": 256, 00:38:56.917 "data_size": 7936 00:38:56.917 } 00:38:56.917 ] 00:38:56.917 }' 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:56.917 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.175 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:57.175 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.175 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.175 [2024-11-26 17:35:34.613304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:57.175 [2024-11-26 17:35:34.613485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:57.175 [2024-11-26 17:35:34.613607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:57.175 [2024-11-26 17:35:34.613664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:57.176 [2024-11-26 17:35:34.613680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:38:57.176 17:35:34 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:38:57.435 17:35:34 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.435 [2024-11-26 17:35:34.677284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:57.435 [2024-11-26 17:35:34.677511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:57.435 [2024-11-26 17:35:34.677565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:38:57.435 [2024-11-26 17:35:34.677651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:57.435 [2024-11-26 17:35:34.680697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:57.435 [2024-11-26 17:35:34.680849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:57.435 [2024-11-26 17:35:34.681021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:57.435 [2024-11-26 17:35:34.681222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:57.435 [2024-11-26 17:35:34.681403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:38:57.435 [2024-11-26 17:35:34.681504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:57.435 [2024-11-26 17:35:34.681794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:57.435 [2024-11-26 17:35:34.682123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:38:57.435 [2024-11-26 17:35:34.682212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:38:57.435 [2024-11-26 17:35:34.682534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:57.435 pt2 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:57.435 "name": "raid_bdev1", 00:38:57.435 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 00:38:57.435 "strip_size_kb": 0, 00:38:57.435 "state": "online", 00:38:57.435 "raid_level": "raid1", 00:38:57.435 "superblock": true, 00:38:57.435 "num_base_bdevs": 2, 00:38:57.435 "num_base_bdevs_discovered": 1, 00:38:57.435 "num_base_bdevs_operational": 1, 00:38:57.435 "base_bdevs_list": [ 00:38:57.435 { 00:38:57.435 "name": null, 00:38:57.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:57.435 "is_configured": false, 00:38:57.435 "data_offset": 256, 00:38:57.435 "data_size": 7936 00:38:57.435 }, 00:38:57.435 { 00:38:57.435 "name": "pt2", 00:38:57.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:57.435 "is_configured": true, 00:38:57.435 "data_offset": 256, 00:38:57.435 "data_size": 7936 00:38:57.435 } 00:38:57.435 ] 00:38:57.435 }' 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:57.435 17:35:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.694 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:57.694 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.694 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.694 [2024-11-26 17:35:35.133537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:57.694 [2024-11-26 17:35:35.133568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:57.694 [2024-11-26 17:35:35.133619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:57.694 [2024-11-26 17:35:35.133662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:57.694 [2024-11-26 17:35:35.133673] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:38:57.694 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.954 [2024-11-26 17:35:35.193581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:57.954 [2024-11-26 17:35:35.193757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:57.954 [2024-11-26 17:35:35.193811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:38:57.954 [2024-11-26 17:35:35.193899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:57.954 [2024-11-26 17:35:35.196737] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:57.954 [2024-11-26 17:35:35.196776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:57.954 [2024-11-26 17:35:35.196845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:57.954 [2024-11-26 17:35:35.196896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:57.954 [2024-11-26 17:35:35.197033] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:38:57.954 [2024-11-26 17:35:35.197061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:57.954 [2024-11-26 17:35:35.197078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:38:57.954 [2024-11-26 17:35:35.197142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:57.954 [2024-11-26 17:35:35.197213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:38:57.954 [2024-11-26 17:35:35.197224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:57.954 [2024-11-26 17:35:35.197488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:57.954 [2024-11-26 17:35:35.197647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:38:57.954 [2024-11-26 17:35:35.197671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:38:57.954 [2024-11-26 17:35:35.197869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:57.954 pt1 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:57.954 "name": "raid_bdev1", 00:38:57.954 "uuid": "5369ed9b-329e-4719-9786-c27dd674dde6", 00:38:57.954 "strip_size_kb": 0, 00:38:57.954 "state": "online", 00:38:57.954 "raid_level": "raid1", 
00:38:57.954 "superblock": true, 00:38:57.954 "num_base_bdevs": 2, 00:38:57.954 "num_base_bdevs_discovered": 1, 00:38:57.954 "num_base_bdevs_operational": 1, 00:38:57.954 "base_bdevs_list": [ 00:38:57.954 { 00:38:57.954 "name": null, 00:38:57.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:57.954 "is_configured": false, 00:38:57.954 "data_offset": 256, 00:38:57.954 "data_size": 7936 00:38:57.954 }, 00:38:57.954 { 00:38:57.954 "name": "pt2", 00:38:57.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:57.954 "is_configured": true, 00:38:57.954 "data_offset": 256, 00:38:57.954 "data_size": 7936 00:38:57.954 } 00:38:57.954 ] 00:38:57.954 }' 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:57.954 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:58.213 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:38:58.213 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:38:58.213 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.213 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:58.472 
[2024-11-26 17:35:35.706097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5369ed9b-329e-4719-9786-c27dd674dde6 '!=' 5369ed9b-329e-4719-9786-c27dd674dde6 ']' 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86660 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86660 ']' 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86660 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:38:58.472 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:58.473 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86660 00:38:58.473 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:58.473 killing process with pid 86660 00:38:58.473 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:58.473 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86660' 00:38:58.473 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86660 00:38:58.473 [2024-11-26 17:35:35.773716] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:58.473 [2024-11-26 17:35:35.773790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:58.473 17:35:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86660 00:38:58.473 [2024-11-26 17:35:35.773838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:38:58.473 [2024-11-26 17:35:35.773857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:38:58.744 [2024-11-26 17:35:36.002975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:00.136 ************************************ 00:39:00.136 END TEST raid_superblock_test_4k 00:39:00.136 ************************************ 00:39:00.136 17:35:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:39:00.136 00:39:00.136 real 0m6.325s 00:39:00.136 user 0m9.418s 00:39:00.136 sys 0m1.274s 00:39:00.136 17:35:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:00.136 17:35:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:00.136 17:35:37 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:39:00.136 17:35:37 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:39:00.136 17:35:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:39:00.136 17:35:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:00.136 17:35:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:00.136 ************************************ 00:39:00.136 START TEST raid_rebuild_test_sb_4k 00:39:00.136 ************************************ 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:39:00.136 17:35:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86990 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86990 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86990 ']' 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:00.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:00.136 17:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:00.136 [2024-11-26 17:35:37.431983] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:39:00.136 [2024-11-26 17:35:37.432175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86990 ] 00:39:00.136 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:39:00.136 Zero copy mechanism will not be used. 00:39:00.394 [2024-11-26 17:35:37.622282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.394 [2024-11-26 17:35:37.765170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.652 [2024-11-26 17:35:38.010872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:00.652 [2024-11-26 17:35:38.010954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:00.912 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:00.912 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:39:00.912 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:00.912 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:39:00.912 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.912 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.172 BaseBdev1_malloc 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.172 [2024-11-26 17:35:38.391378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:01.172 [2024-11-26 17:35:38.391713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:01.172 [2024-11-26 17:35:38.391749] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:39:01.172 [2024-11-26 17:35:38.391767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:01.172 [2024-11-26 17:35:38.394646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:01.172 [2024-11-26 17:35:38.394693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:01.172 BaseBdev1 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.172 BaseBdev2_malloc 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.172 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.172 [2024-11-26 17:35:38.446099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:01.172 [2024-11-26 17:35:38.446385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:01.172 [2024-11-26 17:35:38.446452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:01.173 [2024-11-26 17:35:38.446545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:39:01.173 [2024-11-26 17:35:38.449323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:01.173 [2024-11-26 17:35:38.449475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:01.173 BaseBdev2 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 spare_malloc 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 spare_delay 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 [2024-11-26 17:35:38.530163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:01.173 [2024-11-26 17:35:38.530234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:01.173 [2024-11-26 17:35:38.530257] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:39:01.173 [2024-11-26 17:35:38.530272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:01.173 [2024-11-26 17:35:38.532981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:01.173 [2024-11-26 17:35:38.533023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:01.173 spare 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 [2024-11-26 17:35:38.538234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:01.173 [2024-11-26 17:35:38.540814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:01.173 [2024-11-26 17:35:38.541168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:01.173 [2024-11-26 17:35:38.541223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:01.173 [2024-11-26 17:35:38.541626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:01.173 [2024-11-26 17:35:38.541908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:01.173 [2024-11-26 17:35:38.542007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:01.173 [2024-11-26 17:35:38.542228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:01.173 
17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:01.173 "name": "raid_bdev1", 00:39:01.173 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 
00:39:01.173 "strip_size_kb": 0, 00:39:01.173 "state": "online", 00:39:01.173 "raid_level": "raid1", 00:39:01.173 "superblock": true, 00:39:01.173 "num_base_bdevs": 2, 00:39:01.173 "num_base_bdevs_discovered": 2, 00:39:01.173 "num_base_bdevs_operational": 2, 00:39:01.173 "base_bdevs_list": [ 00:39:01.173 { 00:39:01.173 "name": "BaseBdev1", 00:39:01.173 "uuid": "8828ed03-0f2f-5178-88c9-776500247f37", 00:39:01.173 "is_configured": true, 00:39:01.173 "data_offset": 256, 00:39:01.173 "data_size": 7936 00:39:01.173 }, 00:39:01.173 { 00:39:01.173 "name": "BaseBdev2", 00:39:01.173 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:01.173 "is_configured": true, 00:39:01.173 "data_offset": 256, 00:39:01.173 "data_size": 7936 00:39:01.173 } 00:39:01.173 ] 00:39:01.173 }' 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:01.173 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.741 [2024-11-26 17:35:38.922700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:01.741 17:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:39:02.000 [2024-11-26 17:35:39.266435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:39:02.000 /dev/nbd0 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:02.000 1+0 records in 00:39:02.000 1+0 records out 00:39:02.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765997 s, 5.3 MB/s 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:02.000 17:35:39 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:39:02.000 17:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:39:02.936 7936+0 records in 00:39:02.936 7936+0 records out 00:39:02.936 32505856 bytes (33 MB, 31 MiB) copied, 0.751472 s, 43.3 MB/s 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:02.936 [2024-11-26 17:35:40.348710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:02.936 [2024-11-26 17:35:40.365868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.936 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:03.195 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.195 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:03.195 "name": "raid_bdev1", 00:39:03.195 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:03.195 "strip_size_kb": 0, 00:39:03.195 "state": "online", 00:39:03.195 "raid_level": "raid1", 00:39:03.195 "superblock": true, 00:39:03.195 "num_base_bdevs": 2, 00:39:03.195 "num_base_bdevs_discovered": 1, 00:39:03.195 "num_base_bdevs_operational": 1, 00:39:03.195 "base_bdevs_list": [ 00:39:03.195 { 00:39:03.195 "name": null, 00:39:03.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:03.195 "is_configured": false, 00:39:03.195 "data_offset": 0, 00:39:03.195 "data_size": 7936 00:39:03.195 }, 00:39:03.195 { 00:39:03.195 "name": "BaseBdev2", 00:39:03.195 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:03.195 "is_configured": true, 00:39:03.195 "data_offset": 256, 00:39:03.195 "data_size": 7936 00:39:03.195 } 00:39:03.195 ] 00:39:03.195 }' 00:39:03.195 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:03.195 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:03.453 17:35:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:03.453 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.453 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:03.453 [2024-11-26 17:35:40.741989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:03.453 [2024-11-26 17:35:40.762169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:39:03.454 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.454 17:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:39:03.454 [2024-11-26 17:35:40.764700] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:04.389 17:35:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.389 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:04.389 "name": "raid_bdev1", 00:39:04.389 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:04.389 "strip_size_kb": 0, 00:39:04.389 "state": "online", 00:39:04.389 "raid_level": "raid1", 00:39:04.389 "superblock": true, 00:39:04.389 "num_base_bdevs": 2, 00:39:04.389 "num_base_bdevs_discovered": 2, 00:39:04.389 "num_base_bdevs_operational": 2, 00:39:04.389 "process": { 00:39:04.389 "type": "rebuild", 00:39:04.389 "target": "spare", 00:39:04.389 "progress": { 00:39:04.389 "blocks": 2560, 00:39:04.389 "percent": 32 00:39:04.389 } 00:39:04.389 }, 00:39:04.389 "base_bdevs_list": [ 00:39:04.389 { 00:39:04.389 "name": "spare", 00:39:04.390 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:04.390 "is_configured": true, 00:39:04.390 "data_offset": 256, 00:39:04.390 "data_size": 7936 00:39:04.390 }, 00:39:04.390 { 00:39:04.390 "name": "BaseBdev2", 00:39:04.390 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:04.390 "is_configured": true, 00:39:04.390 "data_offset": 256, 00:39:04.390 "data_size": 7936 00:39:04.390 } 00:39:04.390 ] 00:39:04.390 }' 00:39:04.390 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:04.649 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:04.649 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:04.649 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:04.649 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:04.649 17:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.649 17:35:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:04.649 [2024-11-26 17:35:41.918639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:04.649 [2024-11-26 17:35:41.976497] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:04.649 [2024-11-26 17:35:41.976579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:04.649 [2024-11-26 17:35:41.976596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:04.649 [2024-11-26 17:35:41.976609] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:04.649 17:35:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.649 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:04.649 "name": "raid_bdev1", 00:39:04.649 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:04.649 "strip_size_kb": 0, 00:39:04.649 "state": "online", 00:39:04.649 "raid_level": "raid1", 00:39:04.650 "superblock": true, 00:39:04.650 "num_base_bdevs": 2, 00:39:04.650 "num_base_bdevs_discovered": 1, 00:39:04.650 "num_base_bdevs_operational": 1, 00:39:04.650 "base_bdevs_list": [ 00:39:04.650 { 00:39:04.650 "name": null, 00:39:04.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.650 "is_configured": false, 00:39:04.650 "data_offset": 0, 00:39:04.650 "data_size": 7936 00:39:04.650 }, 00:39:04.650 { 00:39:04.650 "name": "BaseBdev2", 00:39:04.650 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:04.650 "is_configured": true, 00:39:04.650 "data_offset": 256, 00:39:04.650 "data_size": 7936 00:39:04.650 } 00:39:04.650 ] 00:39:04.650 }' 00:39:04.650 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:04.650 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:05.217 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:05.217 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:05.217 17:35:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:05.217 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:05.217 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:05.218 "name": "raid_bdev1", 00:39:05.218 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:05.218 "strip_size_kb": 0, 00:39:05.218 "state": "online", 00:39:05.218 "raid_level": "raid1", 00:39:05.218 "superblock": true, 00:39:05.218 "num_base_bdevs": 2, 00:39:05.218 "num_base_bdevs_discovered": 1, 00:39:05.218 "num_base_bdevs_operational": 1, 00:39:05.218 "base_bdevs_list": [ 00:39:05.218 { 00:39:05.218 "name": null, 00:39:05.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:05.218 "is_configured": false, 00:39:05.218 "data_offset": 0, 00:39:05.218 "data_size": 7936 00:39:05.218 }, 00:39:05.218 { 00:39:05.218 "name": "BaseBdev2", 00:39:05.218 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:05.218 "is_configured": true, 00:39:05.218 "data_offset": 256, 00:39:05.218 "data_size": 7936 00:39:05.218 } 00:39:05.218 ] 00:39:05.218 }' 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:05.218 17:35:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:05.218 [2024-11-26 17:35:42.603001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:05.218 [2024-11-26 17:35:42.621267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.218 17:35:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:39:05.218 [2024-11-26 17:35:42.623894] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:06.594 "name": "raid_bdev1", 00:39:06.594 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:06.594 "strip_size_kb": 0, 00:39:06.594 "state": "online", 00:39:06.594 "raid_level": "raid1", 00:39:06.594 "superblock": true, 00:39:06.594 "num_base_bdevs": 2, 00:39:06.594 "num_base_bdevs_discovered": 2, 00:39:06.594 "num_base_bdevs_operational": 2, 00:39:06.594 "process": { 00:39:06.594 "type": "rebuild", 00:39:06.594 "target": "spare", 00:39:06.594 "progress": { 00:39:06.594 "blocks": 2560, 00:39:06.594 "percent": 32 00:39:06.594 } 00:39:06.594 }, 00:39:06.594 "base_bdevs_list": [ 00:39:06.594 { 00:39:06.594 "name": "spare", 00:39:06.594 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:06.594 "is_configured": true, 00:39:06.594 "data_offset": 256, 00:39:06.594 "data_size": 7936 00:39:06.594 }, 00:39:06.594 { 00:39:06.594 "name": "BaseBdev2", 00:39:06.594 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:06.594 "is_configured": true, 00:39:06.594 "data_offset": 256, 00:39:06.594 "data_size": 7936 00:39:06.594 } 00:39:06.594 ] 00:39:06.594 }' 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:39:06.594 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:39:06.594 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=697 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:06.595 17:35:43 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:06.595 "name": "raid_bdev1", 00:39:06.595 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:06.595 "strip_size_kb": 0, 00:39:06.595 "state": "online", 00:39:06.595 "raid_level": "raid1", 00:39:06.595 "superblock": true, 00:39:06.595 "num_base_bdevs": 2, 00:39:06.595 "num_base_bdevs_discovered": 2, 00:39:06.595 "num_base_bdevs_operational": 2, 00:39:06.595 "process": { 00:39:06.595 "type": "rebuild", 00:39:06.595 "target": "spare", 00:39:06.595 "progress": { 00:39:06.595 "blocks": 2816, 00:39:06.595 "percent": 35 00:39:06.595 } 00:39:06.595 }, 00:39:06.595 "base_bdevs_list": [ 00:39:06.595 { 00:39:06.595 "name": "spare", 00:39:06.595 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:06.595 "is_configured": true, 00:39:06.595 "data_offset": 256, 00:39:06.595 "data_size": 7936 00:39:06.595 }, 00:39:06.595 { 00:39:06.595 "name": "BaseBdev2", 00:39:06.595 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:06.595 "is_configured": true, 00:39:06.595 "data_offset": 256, 00:39:06.595 "data_size": 7936 00:39:06.595 } 00:39:06.595 ] 00:39:06.595 }' 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:06.595 17:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:07.587 "name": "raid_bdev1", 00:39:07.587 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:07.587 "strip_size_kb": 0, 00:39:07.587 "state": "online", 00:39:07.587 "raid_level": "raid1", 00:39:07.587 "superblock": true, 00:39:07.587 "num_base_bdevs": 2, 00:39:07.587 "num_base_bdevs_discovered": 2, 00:39:07.587 "num_base_bdevs_operational": 2, 00:39:07.587 "process": { 00:39:07.587 "type": "rebuild", 00:39:07.587 "target": "spare", 00:39:07.587 "progress": { 00:39:07.587 "blocks": 5632, 00:39:07.587 "percent": 70 00:39:07.587 } 00:39:07.587 }, 00:39:07.587 "base_bdevs_list": [ 00:39:07.587 { 00:39:07.587 "name": "spare", 00:39:07.587 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:07.587 "is_configured": true, 00:39:07.587 "data_offset": 256, 00:39:07.587 "data_size": 7936 00:39:07.587 
}, 00:39:07.587 { 00:39:07.587 "name": "BaseBdev2", 00:39:07.587 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:07.587 "is_configured": true, 00:39:07.587 "data_offset": 256, 00:39:07.587 "data_size": 7936 00:39:07.587 } 00:39:07.587 ] 00:39:07.587 }' 00:39:07.587 17:35:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:07.587 17:35:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:07.587 17:35:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:07.846 17:35:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:07.846 17:35:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:08.412 [2024-11-26 17:35:45.754620] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:08.412 [2024-11-26 17:35:45.754714] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:08.412 [2024-11-26 17:35:45.754843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:08.670 "name": "raid_bdev1", 00:39:08.670 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:08.670 "strip_size_kb": 0, 00:39:08.670 "state": "online", 00:39:08.670 "raid_level": "raid1", 00:39:08.670 "superblock": true, 00:39:08.670 "num_base_bdevs": 2, 00:39:08.670 "num_base_bdevs_discovered": 2, 00:39:08.670 "num_base_bdevs_operational": 2, 00:39:08.670 "base_bdevs_list": [ 00:39:08.670 { 00:39:08.670 "name": "spare", 00:39:08.670 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:08.670 "is_configured": true, 00:39:08.670 "data_offset": 256, 00:39:08.670 "data_size": 7936 00:39:08.670 }, 00:39:08.670 { 00:39:08.670 "name": "BaseBdev2", 00:39:08.670 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:08.670 "is_configured": true, 00:39:08.670 "data_offset": 256, 00:39:08.670 "data_size": 7936 00:39:08.670 } 00:39:08.670 ] 00:39:08.670 }' 00:39:08.670 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.929 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:08.929 "name": "raid_bdev1", 00:39:08.929 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:08.929 "strip_size_kb": 0, 00:39:08.929 "state": "online", 00:39:08.929 "raid_level": "raid1", 00:39:08.929 "superblock": true, 00:39:08.929 "num_base_bdevs": 2, 00:39:08.929 "num_base_bdevs_discovered": 2, 00:39:08.929 "num_base_bdevs_operational": 2, 00:39:08.929 "base_bdevs_list": [ 00:39:08.929 { 00:39:08.929 "name": "spare", 00:39:08.929 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:08.929 "is_configured": true, 00:39:08.929 "data_offset": 256, 00:39:08.929 "data_size": 7936 00:39:08.929 }, 00:39:08.929 { 00:39:08.929 "name": "BaseBdev2", 00:39:08.929 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:08.929 "is_configured": true, 
00:39:08.930 "data_offset": 256, 00:39:08.930 "data_size": 7936 00:39:08.930 } 00:39:08.930 ] 00:39:08.930 }' 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:08.930 17:35:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:08.930 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.188 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:09.188 "name": "raid_bdev1", 00:39:09.188 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:09.188 "strip_size_kb": 0, 00:39:09.188 "state": "online", 00:39:09.188 "raid_level": "raid1", 00:39:09.188 "superblock": true, 00:39:09.188 "num_base_bdevs": 2, 00:39:09.188 "num_base_bdevs_discovered": 2, 00:39:09.188 "num_base_bdevs_operational": 2, 00:39:09.188 "base_bdevs_list": [ 00:39:09.188 { 00:39:09.188 "name": "spare", 00:39:09.188 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:09.188 "is_configured": true, 00:39:09.188 "data_offset": 256, 00:39:09.188 "data_size": 7936 00:39:09.188 }, 00:39:09.188 { 00:39:09.188 "name": "BaseBdev2", 00:39:09.188 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:09.188 "is_configured": true, 00:39:09.188 "data_offset": 256, 00:39:09.188 "data_size": 7936 00:39:09.188 } 00:39:09.188 ] 00:39:09.188 }' 00:39:09.188 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:09.188 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:09.447 [2024-11-26 17:35:46.797812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:09.447 [2024-11-26 17:35:46.797880] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:39:09.447 [2024-11-26 17:35:46.798040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:09.447 [2024-11-26 17:35:46.798181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:09.447 [2024-11-26 17:35:46.798206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:09.447 17:35:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:39:09.705 /dev/nbd0 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:09.705 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:09.705 1+0 records in 00:39:09.705 1+0 records out 00:39:09.705 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231447 s, 17.7 MB/s 00:39:09.706 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:09.706 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:39:09.706 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:09.706 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:09.706 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:39:09.706 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:09.706 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:09.706 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:39:10.272 /dev/nbd1 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:39:10.272 17:35:47 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:10.272 1+0 records in 00:39:10.272 1+0 records out 00:39:10.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330552 s, 12.4 MB/s 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:10.272 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:10.530 17:35:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:11.097 17:35:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.097 [2024-11-26 17:35:48.271875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:11.097 [2024-11-26 17:35:48.271945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:11.097 [2024-11-26 17:35:48.271976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:11.097 [2024-11-26 17:35:48.271988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:11.097 [2024-11-26 17:35:48.274719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:11.097 [2024-11-26 17:35:48.274760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:11.097 [2024-11-26 17:35:48.274851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:39:11.097 [2024-11-26 17:35:48.274908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:11.097 [2024-11-26 17:35:48.275079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:11.097 spare 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.097 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.097 [2024-11-26 17:35:48.375181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:39:11.097 [2024-11-26 17:35:48.375212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:11.097 [2024-11-26 17:35:48.375489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:39:11.097 [2024-11-26 17:35:48.375658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:39:11.097 [2024-11-26 17:35:48.375677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:39:11.097 [2024-11-26 17:35:48.375874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:11.098 
17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:11.098 "name": "raid_bdev1", 00:39:11.098 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:11.098 "strip_size_kb": 0, 00:39:11.098 "state": "online", 00:39:11.098 "raid_level": "raid1", 00:39:11.098 "superblock": true, 00:39:11.098 "num_base_bdevs": 2, 00:39:11.098 "num_base_bdevs_discovered": 2, 00:39:11.098 "num_base_bdevs_operational": 2, 00:39:11.098 "base_bdevs_list": [ 00:39:11.098 { 00:39:11.098 "name": "spare", 00:39:11.098 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:11.098 "is_configured": true, 00:39:11.098 "data_offset": 256, 00:39:11.098 
"data_size": 7936 00:39:11.098 }, 00:39:11.098 { 00:39:11.098 "name": "BaseBdev2", 00:39:11.098 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:11.098 "is_configured": true, 00:39:11.098 "data_offset": 256, 00:39:11.098 "data_size": 7936 00:39:11.098 } 00:39:11.098 ] 00:39:11.098 }' 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:11.098 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.357 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:11.357 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:11.357 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:11.357 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:11.357 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:11.357 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:11.357 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.357 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.357 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:11.618 "name": "raid_bdev1", 00:39:11.618 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:11.618 "strip_size_kb": 0, 00:39:11.618 "state": "online", 00:39:11.618 "raid_level": "raid1", 00:39:11.618 "superblock": true, 00:39:11.618 "num_base_bdevs": 2, 
00:39:11.618 "num_base_bdevs_discovered": 2, 00:39:11.618 "num_base_bdevs_operational": 2, 00:39:11.618 "base_bdevs_list": [ 00:39:11.618 { 00:39:11.618 "name": "spare", 00:39:11.618 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:11.618 "is_configured": true, 00:39:11.618 "data_offset": 256, 00:39:11.618 "data_size": 7936 00:39:11.618 }, 00:39:11.618 { 00:39:11.618 "name": "BaseBdev2", 00:39:11.618 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:11.618 "is_configured": true, 00:39:11.618 "data_offset": 256, 00:39:11.618 "data_size": 7936 00:39:11.618 } 00:39:11.618 ] 00:39:11.618 }' 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.618 17:35:48 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.618 [2024-11-26 17:35:48.984097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.618 17:35:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.618 17:35:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.618 
17:35:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:11.618 "name": "raid_bdev1", 00:39:11.618 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:11.618 "strip_size_kb": 0, 00:39:11.618 "state": "online", 00:39:11.618 "raid_level": "raid1", 00:39:11.618 "superblock": true, 00:39:11.618 "num_base_bdevs": 2, 00:39:11.618 "num_base_bdevs_discovered": 1, 00:39:11.618 "num_base_bdevs_operational": 1, 00:39:11.618 "base_bdevs_list": [ 00:39:11.618 { 00:39:11.618 "name": null, 00:39:11.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:11.619 "is_configured": false, 00:39:11.619 "data_offset": 0, 00:39:11.619 "data_size": 7936 00:39:11.619 }, 00:39:11.619 { 00:39:11.619 "name": "BaseBdev2", 00:39:11.619 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:11.619 "is_configured": true, 00:39:11.619 "data_offset": 256, 00:39:11.619 "data_size": 7936 00:39:11.619 } 00:39:11.619 ] 00:39:11.619 }' 00:39:11.619 17:35:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:11.619 17:35:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:12.185 17:35:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:12.185 17:35:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.185 17:35:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:12.185 [2024-11-26 17:35:49.460670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:12.185 [2024-11-26 17:35:49.460818] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:12.185 [2024-11-26 17:35:49.460842] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:39:12.185 [2024-11-26 17:35:49.460879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:12.185 [2024-11-26 17:35:49.478262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:39:12.185 17:35:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.185 17:35:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:39:12.185 [2024-11-26 17:35:49.480738] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:13.120 "name": "raid_bdev1", 00:39:13.120 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:13.120 "strip_size_kb": 0, 00:39:13.120 "state": "online", 
00:39:13.120 "raid_level": "raid1", 00:39:13.120 "superblock": true, 00:39:13.120 "num_base_bdevs": 2, 00:39:13.120 "num_base_bdevs_discovered": 2, 00:39:13.120 "num_base_bdevs_operational": 2, 00:39:13.120 "process": { 00:39:13.120 "type": "rebuild", 00:39:13.120 "target": "spare", 00:39:13.120 "progress": { 00:39:13.120 "blocks": 2560, 00:39:13.120 "percent": 32 00:39:13.120 } 00:39:13.120 }, 00:39:13.120 "base_bdevs_list": [ 00:39:13.120 { 00:39:13.120 "name": "spare", 00:39:13.120 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:13.120 "is_configured": true, 00:39:13.120 "data_offset": 256, 00:39:13.120 "data_size": 7936 00:39:13.120 }, 00:39:13.120 { 00:39:13.120 "name": "BaseBdev2", 00:39:13.120 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:13.120 "is_configured": true, 00:39:13.120 "data_offset": 256, 00:39:13.120 "data_size": 7936 00:39:13.120 } 00:39:13.120 ] 00:39:13.120 }' 00:39:13.120 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:13.379 [2024-11-26 17:35:50.630027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:13.379 [2024-11-26 17:35:50.691613] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:13.379 [2024-11-26 
17:35:50.691696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:13.379 [2024-11-26 17:35:50.691713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:13.379 [2024-11-26 17:35:50.691726] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:13.379 "name": "raid_bdev1", 00:39:13.379 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:13.379 "strip_size_kb": 0, 00:39:13.379 "state": "online", 00:39:13.379 "raid_level": "raid1", 00:39:13.379 "superblock": true, 00:39:13.379 "num_base_bdevs": 2, 00:39:13.379 "num_base_bdevs_discovered": 1, 00:39:13.379 "num_base_bdevs_operational": 1, 00:39:13.379 "base_bdevs_list": [ 00:39:13.379 { 00:39:13.379 "name": null, 00:39:13.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:13.379 "is_configured": false, 00:39:13.379 "data_offset": 0, 00:39:13.379 "data_size": 7936 00:39:13.379 }, 00:39:13.379 { 00:39:13.379 "name": "BaseBdev2", 00:39:13.379 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:13.379 "is_configured": true, 00:39:13.379 "data_offset": 256, 00:39:13.379 "data_size": 7936 00:39:13.379 } 00:39:13.379 ] 00:39:13.379 }' 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:13.379 17:35:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:13.944 17:35:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:13.944 17:35:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.944 17:35:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:13.944 [2024-11-26 17:35:51.173069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:13.944 [2024-11-26 17:35:51.173182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:13.944 [2024-11-26 17:35:51.173213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:39:13.944 [2024-11-26 17:35:51.173229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:13.944 [2024-11-26 17:35:51.173816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:13.944 [2024-11-26 17:35:51.173849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:13.944 [2024-11-26 17:35:51.173964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:13.944 [2024-11-26 17:35:51.173984] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:13.944 [2024-11-26 17:35:51.173998] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:39:13.944 [2024-11-26 17:35:51.174031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:13.944 [2024-11-26 17:35:51.192580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:39:13.944 spare 00:39:13.944 17:35:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.944 17:35:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:39:13.944 [2024-11-26 17:35:51.195041] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:14.879 "name": "raid_bdev1", 00:39:14.879 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:14.879 "strip_size_kb": 0, 00:39:14.879 "state": "online", 00:39:14.879 "raid_level": "raid1", 00:39:14.879 "superblock": true, 00:39:14.879 "num_base_bdevs": 2, 00:39:14.879 "num_base_bdevs_discovered": 2, 00:39:14.879 "num_base_bdevs_operational": 2, 00:39:14.879 "process": { 00:39:14.879 "type": "rebuild", 00:39:14.879 "target": "spare", 00:39:14.879 "progress": { 00:39:14.879 "blocks": 2560, 00:39:14.879 "percent": 32 00:39:14.879 } 00:39:14.879 }, 00:39:14.879 "base_bdevs_list": [ 00:39:14.879 { 00:39:14.879 "name": "spare", 00:39:14.879 "uuid": "bf8797e3-2cc7-5292-b913-a2a672ed610f", 00:39:14.879 "is_configured": true, 00:39:14.879 "data_offset": 256, 00:39:14.879 "data_size": 7936 00:39:14.879 }, 00:39:14.879 { 00:39:14.879 "name": "BaseBdev2", 00:39:14.879 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:14.879 "is_configured": true, 00:39:14.879 "data_offset": 256, 00:39:14.879 "data_size": 7936 00:39:14.879 } 00:39:14.879 ] 00:39:14.879 }' 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:39:14.879 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:15.138 [2024-11-26 17:35:52.344993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:15.138 [2024-11-26 17:35:52.406748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:15.138 [2024-11-26 17:35:52.406827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:15.138 [2024-11-26 17:35:52.406850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:15.138 [2024-11-26 17:35:52.406860] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:15.138 "name": "raid_bdev1", 00:39:15.138 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:15.138 "strip_size_kb": 0, 00:39:15.138 "state": "online", 00:39:15.138 "raid_level": "raid1", 00:39:15.138 "superblock": true, 00:39:15.138 "num_base_bdevs": 2, 00:39:15.138 "num_base_bdevs_discovered": 1, 00:39:15.138 "num_base_bdevs_operational": 1, 00:39:15.138 "base_bdevs_list": [ 00:39:15.138 { 00:39:15.138 "name": null, 00:39:15.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.138 "is_configured": false, 00:39:15.138 "data_offset": 0, 00:39:15.138 "data_size": 7936 00:39:15.138 }, 00:39:15.138 { 00:39:15.138 "name": "BaseBdev2", 00:39:15.138 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:15.138 "is_configured": true, 00:39:15.138 "data_offset": 256, 00:39:15.138 "data_size": 7936 00:39:15.138 } 00:39:15.138 ] 00:39:15.138 }' 
00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:15.138 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:15.705 "name": "raid_bdev1", 00:39:15.705 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:15.705 "strip_size_kb": 0, 00:39:15.705 "state": "online", 00:39:15.705 "raid_level": "raid1", 00:39:15.705 "superblock": true, 00:39:15.705 "num_base_bdevs": 2, 00:39:15.705 "num_base_bdevs_discovered": 1, 00:39:15.705 "num_base_bdevs_operational": 1, 00:39:15.705 "base_bdevs_list": [ 00:39:15.705 { 00:39:15.705 "name": null, 00:39:15.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.705 "is_configured": false, 00:39:15.705 "data_offset": 0, 
00:39:15.705 "data_size": 7936 00:39:15.705 }, 00:39:15.705 { 00:39:15.705 "name": "BaseBdev2", 00:39:15.705 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:15.705 "is_configured": true, 00:39:15.705 "data_offset": 256, 00:39:15.705 "data_size": 7936 00:39:15.705 } 00:39:15.705 ] 00:39:15.705 }' 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:15.705 17:35:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:15.705 [2024-11-26 17:35:53.049248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:15.705 [2024-11-26 17:35:53.049334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:15.705 [2024-11-26 17:35:53.049377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:39:15.705 [2024-11-26 17:35:53.049408] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:15.705 [2024-11-26 17:35:53.050080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:15.705 [2024-11-26 17:35:53.050120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:15.705 [2024-11-26 17:35:53.050232] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:15.705 [2024-11-26 17:35:53.050252] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:15.705 [2024-11-26 17:35:53.050283] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:15.705 [2024-11-26 17:35:53.050299] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:39:15.705 BaseBdev1 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.705 17:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:16.639 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.897 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:16.897 "name": "raid_bdev1", 00:39:16.897 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:16.897 "strip_size_kb": 0, 00:39:16.897 "state": "online", 00:39:16.897 "raid_level": "raid1", 00:39:16.897 "superblock": true, 00:39:16.897 "num_base_bdevs": 2, 00:39:16.897 "num_base_bdevs_discovered": 1, 00:39:16.897 "num_base_bdevs_operational": 1, 00:39:16.897 "base_bdevs_list": [ 00:39:16.897 { 00:39:16.897 "name": null, 00:39:16.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:16.897 "is_configured": false, 00:39:16.897 "data_offset": 0, 00:39:16.897 "data_size": 7936 00:39:16.897 }, 00:39:16.897 { 00:39:16.897 "name": "BaseBdev2", 00:39:16.897 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:16.897 "is_configured": true, 00:39:16.897 "data_offset": 256, 00:39:16.897 "data_size": 7936 00:39:16.897 } 00:39:16.897 ] 00:39:16.897 }' 00:39:16.897 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:16.897 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:17.156 "name": "raid_bdev1", 00:39:17.156 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:17.156 "strip_size_kb": 0, 00:39:17.156 "state": "online", 00:39:17.156 "raid_level": "raid1", 00:39:17.156 "superblock": true, 00:39:17.156 "num_base_bdevs": 2, 00:39:17.156 "num_base_bdevs_discovered": 1, 00:39:17.156 "num_base_bdevs_operational": 1, 00:39:17.156 "base_bdevs_list": [ 00:39:17.156 { 00:39:17.156 "name": null, 00:39:17.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:17.156 "is_configured": false, 00:39:17.156 "data_offset": 0, 00:39:17.156 "data_size": 7936 00:39:17.156 }, 00:39:17.156 { 00:39:17.156 "name": "BaseBdev2", 00:39:17.156 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:17.156 "is_configured": true, 
00:39:17.156 "data_offset": 256, 00:39:17.156 "data_size": 7936 00:39:17.156 } 00:39:17.156 ] 00:39:17.156 }' 00:39:17.156 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:17.414 [2024-11-26 17:35:54.665671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:17.414 [2024-11-26 17:35:54.665913] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:17.414 [2024-11-26 17:35:54.665944] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:17.414 request: 00:39:17.414 { 00:39:17.414 "base_bdev": "BaseBdev1", 00:39:17.414 "raid_bdev": "raid_bdev1", 00:39:17.414 "method": "bdev_raid_add_base_bdev", 00:39:17.414 "req_id": 1 00:39:17.414 } 00:39:17.414 Got JSON-RPC error response 00:39:17.414 response: 00:39:17.414 { 00:39:17.414 "code": -22, 00:39:17.414 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:17.414 } 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:17.414 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:17.415 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:17.415 17:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:18.352 "name": "raid_bdev1", 00:39:18.352 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:18.352 "strip_size_kb": 0, 00:39:18.352 "state": "online", 00:39:18.352 "raid_level": "raid1", 00:39:18.352 "superblock": true, 00:39:18.352 "num_base_bdevs": 2, 00:39:18.352 "num_base_bdevs_discovered": 1, 00:39:18.352 "num_base_bdevs_operational": 1, 00:39:18.352 "base_bdevs_list": [ 00:39:18.352 { 00:39:18.352 "name": null, 00:39:18.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.352 "is_configured": false, 00:39:18.352 "data_offset": 0, 00:39:18.352 "data_size": 7936 00:39:18.352 }, 00:39:18.352 { 00:39:18.352 "name": "BaseBdev2", 00:39:18.352 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:18.352 "is_configured": true, 00:39:18.352 "data_offset": 256, 00:39:18.352 "data_size": 7936 00:39:18.352 } 00:39:18.352 ] 00:39:18.352 }' 
00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:18.352 17:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:18.920 "name": "raid_bdev1", 00:39:18.920 "uuid": "f6fdc1be-0c2a-4e97-9961-2b8101299d38", 00:39:18.920 "strip_size_kb": 0, 00:39:18.920 "state": "online", 00:39:18.920 "raid_level": "raid1", 00:39:18.920 "superblock": true, 00:39:18.920 "num_base_bdevs": 2, 00:39:18.920 "num_base_bdevs_discovered": 1, 00:39:18.920 "num_base_bdevs_operational": 1, 00:39:18.920 "base_bdevs_list": [ 00:39:18.920 { 00:39:18.920 "name": null, 00:39:18.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.920 "is_configured": false, 00:39:18.920 "data_offset": 0, 
00:39:18.920 "data_size": 7936 00:39:18.920 }, 00:39:18.920 { 00:39:18.920 "name": "BaseBdev2", 00:39:18.920 "uuid": "78003efe-560e-5af0-aaa1-cc3e98a31e48", 00:39:18.920 "is_configured": true, 00:39:18.920 "data_offset": 256, 00:39:18.920 "data_size": 7936 00:39:18.920 } 00:39:18.920 ] 00:39:18.920 }' 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86990 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86990 ']' 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86990 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86990 00:39:18.920 killing process with pid 86990 00:39:18.920 Received shutdown signal, test time was about 60.000000 seconds 00:39:18.920 00:39:18.920 Latency(us) 00:39:18.920 [2024-11-26T17:35:56.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.920 [2024-11-26T17:35:56.367Z] =================================================================================================================== 00:39:18.920 [2024-11-26T17:35:56.367Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:18.920 17:35:56 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86990' 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86990 00:39:18.920 [2024-11-26 17:35:56.277753] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:18.920 17:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86990 00:39:18.920 [2024-11-26 17:35:56.277928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:18.920 [2024-11-26 17:35:56.277993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:18.920 [2024-11-26 17:35:56.278009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:39:19.179 [2024-11-26 17:35:56.607005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:20.556 ************************************ 00:39:20.556 END TEST raid_rebuild_test_sb_4k 00:39:20.556 ************************************ 00:39:20.556 17:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:39:20.556 00:39:20.556 real 0m20.502s 00:39:20.556 user 0m26.479s 00:39:20.556 sys 0m3.140s 00:39:20.556 17:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:20.556 17:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:20.556 17:35:57 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:39:20.556 17:35:57 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:39:20.556 17:35:57 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:20.556 17:35:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:20.556 17:35:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:20.556 ************************************ 00:39:20.556 START TEST raid_state_function_test_sb_md_separate 00:39:20.556 ************************************ 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:39:20.556 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87686 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87686' 00:39:20.557 Process raid pid: 87686 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87686 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87686 ']' 00:39:20.557 17:35:57 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:20.557 17:35:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:20.816 [2024-11-26 17:35:58.015490] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:39:20.816 [2024-11-26 17:35:58.015945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.816 [2024-11-26 17:35:58.214914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:21.076 [2024-11-26 17:35:58.350954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.334 [2024-11-26 17:35:58.625120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:21.334 [2024-11-26 17:35:58.625407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:21.592 [2024-11-26 17:35:59.013450] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:21.592 [2024-11-26 17:35:59.013523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:21.592 [2024-11-26 17:35:59.013539] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:21.592 [2024-11-26 17:35:59.013556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:21.592 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.850 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:21.850 "name": "Existed_Raid", 00:39:21.850 "uuid": "547ae966-5ab0-4ae6-af5b-d1a3c731d308", 00:39:21.850 "strip_size_kb": 0, 00:39:21.850 "state": "configuring", 00:39:21.850 "raid_level": "raid1", 00:39:21.850 "superblock": true, 00:39:21.850 "num_base_bdevs": 2, 00:39:21.850 "num_base_bdevs_discovered": 0, 00:39:21.850 "num_base_bdevs_operational": 2, 00:39:21.850 "base_bdevs_list": [ 00:39:21.850 { 00:39:21.850 "name": "BaseBdev1", 00:39:21.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:21.850 "is_configured": false, 00:39:21.850 "data_offset": 0, 00:39:21.850 "data_size": 0 00:39:21.850 }, 00:39:21.850 { 00:39:21.850 "name": "BaseBdev2", 00:39:21.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:21.850 "is_configured": false, 00:39:21.850 "data_offset": 0, 00:39:21.850 "data_size": 0 00:39:21.850 } 00:39:21.850 ] 00:39:21.850 }' 00:39:21.850 17:35:59 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:21.850 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.108 [2024-11-26 17:35:59.489455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:22.108 [2024-11-26 17:35:59.489502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.108 [2024-11-26 17:35:59.497455] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:22.108 [2024-11-26 17:35:59.497511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:22.108 [2024-11-26 17:35:59.497525] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:22.108 [2024-11-26 17:35:59.497546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:22.108 17:35:59 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.108 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.367 [2024-11-26 17:35:59.554676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:22.367 BaseBdev1 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.367 [ 00:39:22.367 { 00:39:22.367 "name": "BaseBdev1", 00:39:22.367 "aliases": [ 00:39:22.367 "85226c57-3c10-41f9-9094-fa84943e807e" 00:39:22.367 ], 00:39:22.367 "product_name": "Malloc disk", 00:39:22.367 "block_size": 4096, 00:39:22.367 "num_blocks": 8192, 00:39:22.367 "uuid": "85226c57-3c10-41f9-9094-fa84943e807e", 00:39:22.367 "md_size": 32, 00:39:22.367 "md_interleave": false, 00:39:22.367 "dif_type": 0, 00:39:22.367 "assigned_rate_limits": { 00:39:22.367 "rw_ios_per_sec": 0, 00:39:22.367 "rw_mbytes_per_sec": 0, 00:39:22.367 "r_mbytes_per_sec": 0, 00:39:22.367 "w_mbytes_per_sec": 0 00:39:22.367 }, 00:39:22.367 "claimed": true, 00:39:22.367 "claim_type": "exclusive_write", 00:39:22.367 "zoned": false, 00:39:22.367 "supported_io_types": { 00:39:22.367 "read": true, 00:39:22.367 "write": true, 00:39:22.367 "unmap": true, 00:39:22.367 "flush": true, 00:39:22.367 "reset": true, 00:39:22.367 "nvme_admin": false, 00:39:22.367 "nvme_io": false, 00:39:22.367 "nvme_io_md": false, 00:39:22.367 "write_zeroes": true, 00:39:22.367 "zcopy": true, 00:39:22.367 "get_zone_info": false, 00:39:22.367 "zone_management": false, 00:39:22.367 "zone_append": false, 00:39:22.367 "compare": false, 00:39:22.367 "compare_and_write": false, 00:39:22.367 "abort": true, 00:39:22.367 "seek_hole": false, 00:39:22.367 "seek_data": false, 00:39:22.367 "copy": true, 00:39:22.367 "nvme_iov_md": false 00:39:22.367 }, 00:39:22.367 "memory_domains": [ 00:39:22.367 { 00:39:22.367 "dma_device_id": "system", 00:39:22.367 "dma_device_type": 1 00:39:22.367 }, 
00:39:22.367 { 00:39:22.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:22.367 "dma_device_type": 2 00:39:22.367 } 00:39:22.367 ], 00:39:22.367 "driver_specific": {} 00:39:22.367 } 00:39:22.367 ] 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:22.367 "name": "Existed_Raid", 00:39:22.367 "uuid": "b6273886-0159-4bb3-8fca-bc08dfcf22ba", 00:39:22.367 "strip_size_kb": 0, 00:39:22.367 "state": "configuring", 00:39:22.367 "raid_level": "raid1", 00:39:22.367 "superblock": true, 00:39:22.367 "num_base_bdevs": 2, 00:39:22.367 "num_base_bdevs_discovered": 1, 00:39:22.367 "num_base_bdevs_operational": 2, 00:39:22.367 "base_bdevs_list": [ 00:39:22.367 { 00:39:22.367 "name": "BaseBdev1", 00:39:22.367 "uuid": "85226c57-3c10-41f9-9094-fa84943e807e", 00:39:22.367 "is_configured": true, 00:39:22.367 "data_offset": 256, 00:39:22.367 "data_size": 7936 00:39:22.367 }, 00:39:22.367 { 00:39:22.367 "name": "BaseBdev2", 00:39:22.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.367 "is_configured": false, 00:39:22.367 "data_offset": 0, 00:39:22.367 "data_size": 0 00:39:22.367 } 00:39:22.367 ] 00:39:22.367 }' 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:22.367 17:35:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:39:22.627 [2024-11-26 17:36:00.038919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:22.627 [2024-11-26 17:36:00.038992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.627 [2024-11-26 17:36:00.050971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:22.627 [2024-11-26 17:36:00.054270] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:22.627 [2024-11-26 17:36:00.054405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.627 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:22.885 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.885 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:22.885 "name": "Existed_Raid", 00:39:22.885 "uuid": "ec309c7b-7fbf-40cf-bd3f-c34b4f73d7f2", 00:39:22.885 "strip_size_kb": 0, 00:39:22.885 "state": "configuring", 00:39:22.885 "raid_level": "raid1", 00:39:22.885 "superblock": true, 00:39:22.885 "num_base_bdevs": 2, 00:39:22.885 "num_base_bdevs_discovered": 1, 00:39:22.885 
"num_base_bdevs_operational": 2, 00:39:22.885 "base_bdevs_list": [ 00:39:22.885 { 00:39:22.885 "name": "BaseBdev1", 00:39:22.885 "uuid": "85226c57-3c10-41f9-9094-fa84943e807e", 00:39:22.885 "is_configured": true, 00:39:22.885 "data_offset": 256, 00:39:22.885 "data_size": 7936 00:39:22.885 }, 00:39:22.885 { 00:39:22.885 "name": "BaseBdev2", 00:39:22.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.885 "is_configured": false, 00:39:22.885 "data_offset": 0, 00:39:22.885 "data_size": 0 00:39:22.885 } 00:39:22.885 ] 00:39:22.885 }' 00:39:22.885 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:22.885 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.144 [2024-11-26 17:36:00.562376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:23.144 [2024-11-26 17:36:00.562609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:23.144 [2024-11-26 17:36:00.562626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:23.144 [2024-11-26 17:36:00.562704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:39:23.144 [2024-11-26 17:36:00.562829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:23.144 [2024-11-26 17:36:00.562842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:39:23.144 BaseBdev2 
00:39:23.144 [2024-11-26 17:36:00.562938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.144 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.144 [ 00:39:23.144 { 00:39:23.144 "name": "BaseBdev2", 00:39:23.144 "aliases": [ 00:39:23.144 
"239047fa-0eed-47c9-bb02-776e7c042982" 00:39:23.144 ], 00:39:23.144 "product_name": "Malloc disk", 00:39:23.144 "block_size": 4096, 00:39:23.144 "num_blocks": 8192, 00:39:23.144 "uuid": "239047fa-0eed-47c9-bb02-776e7c042982", 00:39:23.144 "md_size": 32, 00:39:23.144 "md_interleave": false, 00:39:23.144 "dif_type": 0, 00:39:23.144 "assigned_rate_limits": { 00:39:23.144 "rw_ios_per_sec": 0, 00:39:23.144 "rw_mbytes_per_sec": 0, 00:39:23.404 "r_mbytes_per_sec": 0, 00:39:23.404 "w_mbytes_per_sec": 0 00:39:23.404 }, 00:39:23.404 "claimed": true, 00:39:23.404 "claim_type": "exclusive_write", 00:39:23.404 "zoned": false, 00:39:23.404 "supported_io_types": { 00:39:23.404 "read": true, 00:39:23.404 "write": true, 00:39:23.404 "unmap": true, 00:39:23.404 "flush": true, 00:39:23.404 "reset": true, 00:39:23.404 "nvme_admin": false, 00:39:23.404 "nvme_io": false, 00:39:23.404 "nvme_io_md": false, 00:39:23.404 "write_zeroes": true, 00:39:23.404 "zcopy": true, 00:39:23.404 "get_zone_info": false, 00:39:23.404 "zone_management": false, 00:39:23.404 "zone_append": false, 00:39:23.404 "compare": false, 00:39:23.404 "compare_and_write": false, 00:39:23.404 "abort": true, 00:39:23.404 "seek_hole": false, 00:39:23.404 "seek_data": false, 00:39:23.404 "copy": true, 00:39:23.404 "nvme_iov_md": false 00:39:23.404 }, 00:39:23.404 "memory_domains": [ 00:39:23.404 { 00:39:23.404 "dma_device_id": "system", 00:39:23.404 "dma_device_type": 1 00:39:23.404 }, 00:39:23.404 { 00:39:23.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:23.404 "dma_device_type": 2 00:39:23.404 } 00:39:23.404 ], 00:39:23.404 "driver_specific": {} 00:39:23.404 } 00:39:23.404 ] 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.404 17:36:00 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.404 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:23.404 "name": "Existed_Raid", 00:39:23.404 "uuid": "ec309c7b-7fbf-40cf-bd3f-c34b4f73d7f2", 00:39:23.404 "strip_size_kb": 0, 00:39:23.404 "state": "online", 00:39:23.404 "raid_level": "raid1", 00:39:23.404 "superblock": true, 00:39:23.404 "num_base_bdevs": 2, 00:39:23.404 "num_base_bdevs_discovered": 2, 00:39:23.404 "num_base_bdevs_operational": 2, 00:39:23.404 "base_bdevs_list": [ 00:39:23.404 { 00:39:23.404 "name": "BaseBdev1", 00:39:23.404 "uuid": "85226c57-3c10-41f9-9094-fa84943e807e", 00:39:23.404 "is_configured": true, 00:39:23.404 "data_offset": 256, 00:39:23.404 "data_size": 7936 00:39:23.404 }, 00:39:23.404 { 00:39:23.404 "name": "BaseBdev2", 00:39:23.405 "uuid": "239047fa-0eed-47c9-bb02-776e7c042982", 00:39:23.405 "is_configured": true, 00:39:23.405 "data_offset": 256, 00:39:23.405 "data_size": 7936 00:39:23.405 } 00:39:23.405 ] 00:39:23.405 }' 00:39:23.405 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:23.405 17:36:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:39:23.664 17:36:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.664 [2024-11-26 17:36:01.018861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:23.664 "name": "Existed_Raid", 00:39:23.664 "aliases": [ 00:39:23.664 "ec309c7b-7fbf-40cf-bd3f-c34b4f73d7f2" 00:39:23.664 ], 00:39:23.664 "product_name": "Raid Volume", 00:39:23.664 "block_size": 4096, 00:39:23.664 "num_blocks": 7936, 00:39:23.664 "uuid": "ec309c7b-7fbf-40cf-bd3f-c34b4f73d7f2", 00:39:23.664 "md_size": 32, 00:39:23.664 "md_interleave": false, 00:39:23.664 "dif_type": 0, 00:39:23.664 "assigned_rate_limits": { 00:39:23.664 "rw_ios_per_sec": 0, 00:39:23.664 "rw_mbytes_per_sec": 0, 00:39:23.664 "r_mbytes_per_sec": 0, 00:39:23.664 "w_mbytes_per_sec": 0 00:39:23.664 }, 00:39:23.664 "claimed": false, 00:39:23.664 "zoned": false, 00:39:23.664 "supported_io_types": { 00:39:23.664 "read": true, 00:39:23.664 "write": true, 00:39:23.664 "unmap": false, 00:39:23.664 "flush": false, 00:39:23.664 "reset": true, 00:39:23.664 "nvme_admin": false, 00:39:23.664 "nvme_io": false, 00:39:23.664 "nvme_io_md": false, 00:39:23.664 "write_zeroes": true, 00:39:23.664 "zcopy": false, 00:39:23.664 "get_zone_info": 
false, 00:39:23.664 "zone_management": false, 00:39:23.664 "zone_append": false, 00:39:23.664 "compare": false, 00:39:23.664 "compare_and_write": false, 00:39:23.664 "abort": false, 00:39:23.664 "seek_hole": false, 00:39:23.664 "seek_data": false, 00:39:23.664 "copy": false, 00:39:23.664 "nvme_iov_md": false 00:39:23.664 }, 00:39:23.664 "memory_domains": [ 00:39:23.664 { 00:39:23.664 "dma_device_id": "system", 00:39:23.664 "dma_device_type": 1 00:39:23.664 }, 00:39:23.664 { 00:39:23.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:23.664 "dma_device_type": 2 00:39:23.664 }, 00:39:23.664 { 00:39:23.664 "dma_device_id": "system", 00:39:23.664 "dma_device_type": 1 00:39:23.664 }, 00:39:23.664 { 00:39:23.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:23.664 "dma_device_type": 2 00:39:23.664 } 00:39:23.664 ], 00:39:23.664 "driver_specific": { 00:39:23.664 "raid": { 00:39:23.664 "uuid": "ec309c7b-7fbf-40cf-bd3f-c34b4f73d7f2", 00:39:23.664 "strip_size_kb": 0, 00:39:23.664 "state": "online", 00:39:23.664 "raid_level": "raid1", 00:39:23.664 "superblock": true, 00:39:23.664 "num_base_bdevs": 2, 00:39:23.664 "num_base_bdevs_discovered": 2, 00:39:23.664 "num_base_bdevs_operational": 2, 00:39:23.664 "base_bdevs_list": [ 00:39:23.664 { 00:39:23.664 "name": "BaseBdev1", 00:39:23.664 "uuid": "85226c57-3c10-41f9-9094-fa84943e807e", 00:39:23.664 "is_configured": true, 00:39:23.664 "data_offset": 256, 00:39:23.664 "data_size": 7936 00:39:23.664 }, 00:39:23.664 { 00:39:23.664 "name": "BaseBdev2", 00:39:23.664 "uuid": "239047fa-0eed-47c9-bb02-776e7c042982", 00:39:23.664 "is_configured": true, 00:39:23.664 "data_offset": 256, 00:39:23.664 "data_size": 7936 00:39:23.664 } 00:39:23.664 ] 00:39:23.664 } 00:39:23.664 } 00:39:23.664 }' 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:23.664 17:36:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:39:23.664 BaseBdev2' 00:39:23.664 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.923 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:23.923 [2024-11-26 17:36:01.202695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.181 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:24.181 "name": "Existed_Raid", 00:39:24.181 "uuid": 
"ec309c7b-7fbf-40cf-bd3f-c34b4f73d7f2", 00:39:24.181 "strip_size_kb": 0, 00:39:24.181 "state": "online", 00:39:24.181 "raid_level": "raid1", 00:39:24.181 "superblock": true, 00:39:24.181 "num_base_bdevs": 2, 00:39:24.182 "num_base_bdevs_discovered": 1, 00:39:24.182 "num_base_bdevs_operational": 1, 00:39:24.182 "base_bdevs_list": [ 00:39:24.182 { 00:39:24.182 "name": null, 00:39:24.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.182 "is_configured": false, 00:39:24.182 "data_offset": 0, 00:39:24.182 "data_size": 7936 00:39:24.182 }, 00:39:24.182 { 00:39:24.182 "name": "BaseBdev2", 00:39:24.182 "uuid": "239047fa-0eed-47c9-bb02-776e7c042982", 00:39:24.182 "is_configured": true, 00:39:24.182 "data_offset": 256, 00:39:24.182 "data_size": 7936 00:39:24.182 } 00:39:24.182 ] 00:39:24.182 }' 00:39:24.182 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:24.182 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.440 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:24.440 [2024-11-26 17:36:01.827208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:24.440 [2024-11-26 17:36:01.827338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:24.698 [2024-11-26 17:36:01.939451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:24.698 [2024-11-26 17:36:01.939519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:24.698 [2024-11-26 17:36:01.939537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:24.698 17:36:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87686 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87686 ']' 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87686 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:24.698 17:36:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87686 00:39:24.698 killing process with pid 87686 00:39:24.698 17:36:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:24.698 17:36:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:24.698 17:36:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87686' 00:39:24.698 17:36:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87686 00:39:24.698 [2024-11-26 17:36:02.027857] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:24.698 17:36:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87686 00:39:24.698 [2024-11-26 17:36:02.045820] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:26.074 17:36:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:39:26.074 00:39:26.074 real 0m5.400s 00:39:26.074 user 0m7.575s 00:39:26.074 sys 0m1.083s 00:39:26.074 17:36:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.074 ************************************ 00:39:26.074 END TEST raid_state_function_test_sb_md_separate 00:39:26.074 ************************************ 00:39:26.074 17:36:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:26.074 17:36:03 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:39:26.074 17:36:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:26.074 17:36:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:26.074 17:36:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:26.074 ************************************ 00:39:26.074 START TEST raid_superblock_test_md_separate 00:39:26.074 ************************************ 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87933 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87933 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:39:26.074 17:36:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87933 ']' 00:39:26.075 17:36:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.075 17:36:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:26.075 17:36:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:26.075 17:36:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:26.075 17:36:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:26.075 [2024-11-26 17:36:03.481058] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:39:26.075 [2024-11-26 17:36:03.481222] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87933 ] 00:39:26.333 [2024-11-26 17:36:03.657489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.589 [2024-11-26 17:36:03.799344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.846 [2024-11-26 17:36:04.041523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:26.846 [2024-11-26 17:36:04.041565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:27.105 17:36:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.105 malloc1 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.105 [2024-11-26 17:36:04.480488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:27.105 [2024-11-26 17:36:04.480778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:27.105 [2024-11-26 17:36:04.480848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:39:27.105 [2024-11-26 17:36:04.480940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:27.105 [2024-11-26 17:36:04.483538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:27.105 [2024-11-26 17:36:04.483695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:27.105 pt1 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.105 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.106 malloc2 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.106 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.106 [2024-11-26 17:36:04.546520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:27.106 [2024-11-26 17:36:04.546579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:27.106 [2024-11-26 17:36:04.546608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:27.106 [2024-11-26 17:36:04.546621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:27.106 [2024-11-26 17:36:04.549250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:27.106 [2024-11-26 17:36:04.549412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:27.364 pt2 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.364 [2024-11-26 17:36:04.558562] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:27.364 [2024-11-26 17:36:04.561176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:27.364 [2024-11-26 17:36:04.561365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:27.364 [2024-11-26 17:36:04.561381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:27.364 [2024-11-26 17:36:04.561455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:39:27.364 [2024-11-26 17:36:04.561590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:27.364 [2024-11-26 17:36:04.561604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:27.364 [2024-11-26 17:36:04.561700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:27.364 17:36:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:27.364 "name": "raid_bdev1", 00:39:27.364 "uuid": "fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:27.364 "strip_size_kb": 0, 00:39:27.364 "state": "online", 00:39:27.364 "raid_level": "raid1", 00:39:27.364 "superblock": true, 00:39:27.364 "num_base_bdevs": 2, 00:39:27.364 "num_base_bdevs_discovered": 2, 00:39:27.364 "num_base_bdevs_operational": 2, 00:39:27.364 "base_bdevs_list": [ 00:39:27.364 { 00:39:27.364 "name": "pt1", 00:39:27.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:27.364 "is_configured": true, 00:39:27.364 "data_offset": 256, 00:39:27.364 "data_size": 7936 00:39:27.364 }, 00:39:27.364 { 00:39:27.364 "name": "pt2", 00:39:27.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:27.364 "is_configured": true, 00:39:27.364 "data_offset": 256, 00:39:27.364 "data_size": 7936 00:39:27.364 } 00:39:27.364 ] 00:39:27.364 }' 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:39:27.364 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.622 17:36:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:27.622 [2024-11-26 17:36:04.998921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:27.622 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.622 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:27.622 "name": "raid_bdev1", 00:39:27.622 "aliases": [ 00:39:27.623 "fb789a53-f4f2-4ae9-9147-b3272519d778" 00:39:27.623 ], 00:39:27.623 "product_name": "Raid Volume", 00:39:27.623 "block_size": 4096, 00:39:27.623 "num_blocks": 7936, 00:39:27.623 "uuid": "fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:27.623 "md_size": 32, 
00:39:27.623 "md_interleave": false, 00:39:27.623 "dif_type": 0, 00:39:27.623 "assigned_rate_limits": { 00:39:27.623 "rw_ios_per_sec": 0, 00:39:27.623 "rw_mbytes_per_sec": 0, 00:39:27.623 "r_mbytes_per_sec": 0, 00:39:27.623 "w_mbytes_per_sec": 0 00:39:27.623 }, 00:39:27.623 "claimed": false, 00:39:27.623 "zoned": false, 00:39:27.623 "supported_io_types": { 00:39:27.623 "read": true, 00:39:27.623 "write": true, 00:39:27.623 "unmap": false, 00:39:27.623 "flush": false, 00:39:27.623 "reset": true, 00:39:27.623 "nvme_admin": false, 00:39:27.623 "nvme_io": false, 00:39:27.623 "nvme_io_md": false, 00:39:27.623 "write_zeroes": true, 00:39:27.623 "zcopy": false, 00:39:27.623 "get_zone_info": false, 00:39:27.623 "zone_management": false, 00:39:27.623 "zone_append": false, 00:39:27.623 "compare": false, 00:39:27.623 "compare_and_write": false, 00:39:27.623 "abort": false, 00:39:27.623 "seek_hole": false, 00:39:27.623 "seek_data": false, 00:39:27.623 "copy": false, 00:39:27.623 "nvme_iov_md": false 00:39:27.623 }, 00:39:27.623 "memory_domains": [ 00:39:27.623 { 00:39:27.623 "dma_device_id": "system", 00:39:27.623 "dma_device_type": 1 00:39:27.623 }, 00:39:27.623 { 00:39:27.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:27.623 "dma_device_type": 2 00:39:27.623 }, 00:39:27.623 { 00:39:27.623 "dma_device_id": "system", 00:39:27.623 "dma_device_type": 1 00:39:27.623 }, 00:39:27.623 { 00:39:27.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:27.623 "dma_device_type": 2 00:39:27.623 } 00:39:27.623 ], 00:39:27.623 "driver_specific": { 00:39:27.623 "raid": { 00:39:27.623 "uuid": "fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:27.623 "strip_size_kb": 0, 00:39:27.623 "state": "online", 00:39:27.623 "raid_level": "raid1", 00:39:27.623 "superblock": true, 00:39:27.623 "num_base_bdevs": 2, 00:39:27.623 "num_base_bdevs_discovered": 2, 00:39:27.623 "num_base_bdevs_operational": 2, 00:39:27.623 "base_bdevs_list": [ 00:39:27.623 { 00:39:27.623 "name": "pt1", 00:39:27.623 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:39:27.623 "is_configured": true, 00:39:27.623 "data_offset": 256, 00:39:27.623 "data_size": 7936 00:39:27.623 }, 00:39:27.623 { 00:39:27.623 "name": "pt2", 00:39:27.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:27.623 "is_configured": true, 00:39:27.623 "data_offset": 256, 00:39:27.623 "data_size": 7936 00:39:27.623 } 00:39:27.623 ] 00:39:27.623 } 00:39:27.623 } 00:39:27.623 }' 00:39:27.623 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:27.881 pt2' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.881 [2024-11-26 17:36:05.214860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fb789a53-f4f2-4ae9-9147-b3272519d778 00:39:27.881 
17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z fb789a53-f4f2-4ae9-9147-b3272519d778 ']' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.881 [2024-11-26 17:36:05.254631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:27.881 [2024-11-26 17:36:05.254656] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:27.881 [2024-11-26 17:36:05.254753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:27.881 [2024-11-26 17:36:05.254815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:27.881 [2024-11-26 17:36:05.254831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:39:27.881 17:36:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:27.881 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.139 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:39:28.139 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.139 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.139 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:28.139 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.140 [2024-11-26 17:36:05.386668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:28.140 [2024-11-26 17:36:05.389180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:28.140 [2024-11-26 17:36:05.389254] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:39:28.140 [2024-11-26 17:36:05.389308] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:39:28.140 [2024-11-26 17:36:05.389324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:28.140 [2024-11-26 17:36:05.389336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:39:28.140 request: 00:39:28.140 { 00:39:28.140 "name": "raid_bdev1", 00:39:28.140 "raid_level": "raid1", 00:39:28.140 "base_bdevs": [ 00:39:28.140 "malloc1", 00:39:28.140 "malloc2" 00:39:28.140 ], 00:39:28.140 "superblock": false, 00:39:28.140 "method": "bdev_raid_create", 00:39:28.140 "req_id": 1 00:39:28.140 } 00:39:28.140 Got JSON-RPC error response 00:39:28.140 response: 00:39:28.140 { 00:39:28.140 "code": -17, 00:39:28.140 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:28.140 } 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.140 17:36:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.140 [2024-11-26 17:36:05.454678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:28.140 [2024-11-26 17:36:05.454732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:28.140 [2024-11-26 17:36:05.454751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:28.140 [2024-11-26 17:36:05.454766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:28.140 [2024-11-26 17:36:05.457322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:28.140 [2024-11-26 17:36:05.457472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:28.140 [2024-11-26 17:36:05.457529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:28.140 [2024-11-26 17:36:05.457589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:28.140 pt1 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:28.140 
17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:28.140 "name": "raid_bdev1", 00:39:28.140 "uuid": "fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:28.140 "strip_size_kb": 0, 00:39:28.140 "state": "configuring", 00:39:28.140 "raid_level": "raid1", 00:39:28.140 "superblock": true, 00:39:28.140 "num_base_bdevs": 2, 00:39:28.140 "num_base_bdevs_discovered": 1, 00:39:28.140 
"num_base_bdevs_operational": 2, 00:39:28.140 "base_bdevs_list": [ 00:39:28.140 { 00:39:28.140 "name": "pt1", 00:39:28.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:28.140 "is_configured": true, 00:39:28.140 "data_offset": 256, 00:39:28.140 "data_size": 7936 00:39:28.140 }, 00:39:28.140 { 00:39:28.140 "name": null, 00:39:28.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:28.140 "is_configured": false, 00:39:28.140 "data_offset": 256, 00:39:28.140 "data_size": 7936 00:39:28.140 } 00:39:28.140 ] 00:39:28.140 }' 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:28.140 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.708 [2024-11-26 17:36:05.922748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:28.708 [2024-11-26 17:36:05.922948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:28.708 [2024-11-26 17:36:05.922976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:39:28.708 [2024-11-26 17:36:05.922991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:28.708 
[2024-11-26 17:36:05.923178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:28.708 [2024-11-26 17:36:05.923201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:28.708 [2024-11-26 17:36:05.923245] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:28.708 [2024-11-26 17:36:05.923267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:28.708 [2024-11-26 17:36:05.923370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:28.708 [2024-11-26 17:36:05.923394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:28.708 [2024-11-26 17:36:05.923467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:28.708 [2024-11-26 17:36:05.923581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:28.708 [2024-11-26 17:36:05.923590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:39:28.708 [2024-11-26 17:36:05.923688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:28.708 pt2 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:28.708 "name": "raid_bdev1", 00:39:28.708 "uuid": "fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:28.708 "strip_size_kb": 0, 00:39:28.708 "state": "online", 00:39:28.708 "raid_level": "raid1", 00:39:28.708 "superblock": true, 00:39:28.708 "num_base_bdevs": 2, 00:39:28.708 "num_base_bdevs_discovered": 2, 00:39:28.708 "num_base_bdevs_operational": 2, 00:39:28.708 "base_bdevs_list": [ 00:39:28.708 { 00:39:28.708 "name": 
"pt1", 00:39:28.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:28.708 "is_configured": true, 00:39:28.708 "data_offset": 256, 00:39:28.708 "data_size": 7936 00:39:28.708 }, 00:39:28.708 { 00:39:28.708 "name": "pt2", 00:39:28.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:28.708 "is_configured": true, 00:39:28.708 "data_offset": 256, 00:39:28.708 "data_size": 7936 00:39:28.708 } 00:39:28.708 ] 00:39:28.708 }' 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:28.708 17:36:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.966 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:28.966 [2024-11-26 17:36:06.387135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:29.224 17:36:06 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:29.224 "name": "raid_bdev1", 00:39:29.224 "aliases": [ 00:39:29.224 "fb789a53-f4f2-4ae9-9147-b3272519d778" 00:39:29.224 ], 00:39:29.224 "product_name": "Raid Volume", 00:39:29.224 "block_size": 4096, 00:39:29.224 "num_blocks": 7936, 00:39:29.224 "uuid": "fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:29.224 "md_size": 32, 00:39:29.224 "md_interleave": false, 00:39:29.224 "dif_type": 0, 00:39:29.224 "assigned_rate_limits": { 00:39:29.224 "rw_ios_per_sec": 0, 00:39:29.224 "rw_mbytes_per_sec": 0, 00:39:29.224 "r_mbytes_per_sec": 0, 00:39:29.224 "w_mbytes_per_sec": 0 00:39:29.224 }, 00:39:29.224 "claimed": false, 00:39:29.224 "zoned": false, 00:39:29.224 "supported_io_types": { 00:39:29.224 "read": true, 00:39:29.224 "write": true, 00:39:29.224 "unmap": false, 00:39:29.224 "flush": false, 00:39:29.224 "reset": true, 00:39:29.224 "nvme_admin": false, 00:39:29.224 "nvme_io": false, 00:39:29.224 "nvme_io_md": false, 00:39:29.224 "write_zeroes": true, 00:39:29.224 "zcopy": false, 00:39:29.224 "get_zone_info": false, 00:39:29.224 "zone_management": false, 00:39:29.224 "zone_append": false, 00:39:29.224 "compare": false, 00:39:29.224 "compare_and_write": false, 00:39:29.224 "abort": false, 00:39:29.224 "seek_hole": false, 00:39:29.224 "seek_data": false, 00:39:29.224 "copy": false, 00:39:29.224 "nvme_iov_md": false 00:39:29.224 }, 00:39:29.224 "memory_domains": [ 00:39:29.224 { 00:39:29.224 "dma_device_id": "system", 00:39:29.224 "dma_device_type": 1 00:39:29.224 }, 00:39:29.224 { 00:39:29.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:29.224 "dma_device_type": 2 00:39:29.224 }, 00:39:29.224 { 00:39:29.224 "dma_device_id": "system", 00:39:29.224 "dma_device_type": 1 00:39:29.224 }, 00:39:29.224 { 00:39:29.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:29.224 
"dma_device_type": 2 00:39:29.224 } 00:39:29.224 ], 00:39:29.224 "driver_specific": { 00:39:29.224 "raid": { 00:39:29.224 "uuid": "fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:29.224 "strip_size_kb": 0, 00:39:29.224 "state": "online", 00:39:29.224 "raid_level": "raid1", 00:39:29.224 "superblock": true, 00:39:29.224 "num_base_bdevs": 2, 00:39:29.224 "num_base_bdevs_discovered": 2, 00:39:29.224 "num_base_bdevs_operational": 2, 00:39:29.224 "base_bdevs_list": [ 00:39:29.224 { 00:39:29.224 "name": "pt1", 00:39:29.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:29.224 "is_configured": true, 00:39:29.224 "data_offset": 256, 00:39:29.224 "data_size": 7936 00:39:29.224 }, 00:39:29.224 { 00:39:29.224 "name": "pt2", 00:39:29.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:29.224 "is_configured": true, 00:39:29.224 "data_offset": 256, 00:39:29.224 "data_size": 7936 00:39:29.224 } 00:39:29.224 ] 00:39:29.224 } 00:39:29.224 } 00:39:29.224 }' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:29.224 pt2' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.224 [2024-11-26 17:36:06.619184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' fb789a53-f4f2-4ae9-9147-b3272519d778 '!=' fb789a53-f4f2-4ae9-9147-b3272519d778 ']' 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:39:29.224 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.225 [2024-11-26 17:36:06.662947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:29.225 
17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:29.225 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:29.500 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:29.500 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.500 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.500 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.500 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.500 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:29.500 "name": "raid_bdev1", 00:39:29.500 "uuid": "fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:29.500 "strip_size_kb": 0, 00:39:29.500 "state": "online", 00:39:29.500 "raid_level": "raid1", 00:39:29.500 "superblock": true, 00:39:29.500 "num_base_bdevs": 2, 00:39:29.500 "num_base_bdevs_discovered": 1, 00:39:29.500 "num_base_bdevs_operational": 1, 00:39:29.500 "base_bdevs_list": [ 00:39:29.500 { 00:39:29.500 "name": null, 00:39:29.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.500 "is_configured": false, 00:39:29.500 "data_offset": 0, 00:39:29.500 
"data_size": 7936 00:39:29.500 }, 00:39:29.500 { 00:39:29.500 "name": "pt2", 00:39:29.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:29.500 "is_configured": true, 00:39:29.500 "data_offset": 256, 00:39:29.500 "data_size": 7936 00:39:29.500 } 00:39:29.500 ] 00:39:29.500 }' 00:39:29.500 17:36:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:29.500 17:36:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.787 [2024-11-26 17:36:07.115014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:29.787 [2024-11-26 17:36:07.115216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:29.787 [2024-11-26 17:36:07.115306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:29.787 [2024-11-26 17:36:07.115359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:29.787 [2024-11-26 17:36:07.115375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:29.787 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.787 [2024-11-26 17:36:07.183035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:29.787 [2024-11-26 17:36:07.183108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:29.787 [2024-11-26 17:36:07.183130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:39:29.787 [2024-11-26 17:36:07.183147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:29.787 [2024-11-26 17:36:07.185764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:29.787 [2024-11-26 17:36:07.185945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:29.787 [2024-11-26 17:36:07.186014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:29.787 [2024-11-26 17:36:07.186085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:29.787 [2024-11-26 17:36:07.186207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:39:29.788 [2024-11-26 17:36:07.186223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:29.788 [2024-11-26 17:36:07.186301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:29.788 [2024-11-26 17:36:07.186437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:39:29.788 [2024-11-26 17:36:07.186447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:39:29.788 [2024-11-26 17:36:07.186550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:29.788 pt2 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.788 
17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:29.788 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.046 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:30.046 "name": "raid_bdev1", 00:39:30.046 "uuid": 
"fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:30.046 "strip_size_kb": 0, 00:39:30.046 "state": "online", 00:39:30.046 "raid_level": "raid1", 00:39:30.046 "superblock": true, 00:39:30.046 "num_base_bdevs": 2, 00:39:30.046 "num_base_bdevs_discovered": 1, 00:39:30.046 "num_base_bdevs_operational": 1, 00:39:30.046 "base_bdevs_list": [ 00:39:30.046 { 00:39:30.046 "name": null, 00:39:30.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:30.046 "is_configured": false, 00:39:30.046 "data_offset": 256, 00:39:30.046 "data_size": 7936 00:39:30.046 }, 00:39:30.046 { 00:39:30.046 "name": "pt2", 00:39:30.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:30.046 "is_configured": true, 00:39:30.046 "data_offset": 256, 00:39:30.046 "data_size": 7936 00:39:30.046 } 00:39:30.046 ] 00:39:30.046 }' 00:39:30.046 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:30.046 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:30.304 [2024-11-26 17:36:07.651097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:30.304 [2024-11-26 17:36:07.651127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:30.304 [2024-11-26 17:36:07.651191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:30.304 [2024-11-26 17:36:07.651243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:30.304 [2024-11-26 17:36:07.651254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:30.304 [2024-11-26 17:36:07.711161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:30.304 [2024-11-26 17:36:07.711328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:30.304 [2024-11-26 17:36:07.711388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:39:30.304 [2024-11-26 17:36:07.711469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:30.304 [2024-11-26 
17:36:07.714115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:30.304 [2024-11-26 17:36:07.714151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:30.304 [2024-11-26 17:36:07.714209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:30.304 [2024-11-26 17:36:07.714256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:30.304 [2024-11-26 17:36:07.714416] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:39:30.304 [2024-11-26 17:36:07.714428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:30.304 [2024-11-26 17:36:07.714447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:39:30.304 [2024-11-26 17:36:07.714527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:30.304 [2024-11-26 17:36:07.714596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:39:30.304 [2024-11-26 17:36:07.714606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:30.304 [2024-11-26 17:36:07.714670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:30.304 [2024-11-26 17:36:07.714778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:39:30.304 [2024-11-26 17:36:07.714790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:39:30.304 [2024-11-26 17:36:07.714891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:30.304 pt1 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.304 17:36:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:30.304 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.562 17:36:07 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:30.562 "name": "raid_bdev1", 00:39:30.562 "uuid": "fb789a53-f4f2-4ae9-9147-b3272519d778", 00:39:30.562 "strip_size_kb": 0, 00:39:30.562 "state": "online", 00:39:30.562 "raid_level": "raid1", 00:39:30.562 "superblock": true, 00:39:30.562 "num_base_bdevs": 2, 00:39:30.562 "num_base_bdevs_discovered": 1, 00:39:30.562 "num_base_bdevs_operational": 1, 00:39:30.562 "base_bdevs_list": [ 00:39:30.562 { 00:39:30.562 "name": null, 00:39:30.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:30.562 "is_configured": false, 00:39:30.562 "data_offset": 256, 00:39:30.562 "data_size": 7936 00:39:30.562 }, 00:39:30.562 { 00:39:30.562 "name": "pt2", 00:39:30.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:30.562 "is_configured": true, 00:39:30.562 "data_offset": 256, 00:39:30.562 "data_size": 7936 00:39:30.562 } 00:39:30.562 ] 00:39:30.562 }' 00:39:30.562 17:36:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:30.562 17:36:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:39:30.821 [2024-11-26 17:36:08.215469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' fb789a53-f4f2-4ae9-9147-b3272519d778 '!=' fb789a53-f4f2-4ae9-9147-b3272519d778 ']' 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87933 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87933 ']' 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87933 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:39:30.821 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:31.079 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87933 00:39:31.079 killing process with pid 87933 00:39:31.079 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:31.079 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:31.079 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87933' 00:39:31.079 17:36:08 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 87933 00:39:31.079 [2024-11-26 17:36:08.300870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:31.079 [2024-11-26 17:36:08.300944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:31.079 [2024-11-26 17:36:08.300985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:31.079 17:36:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87933 00:39:31.079 [2024-11-26 17:36:08.301006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:39:31.337 [2024-11-26 17:36:08.544605] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:32.723 ************************************ 00:39:32.723 END TEST raid_superblock_test_md_separate 00:39:32.723 ************************************ 00:39:32.723 17:36:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:39:32.723 00:39:32.723 real 0m6.402s 00:39:32.723 user 0m9.536s 00:39:32.723 sys 0m1.342s 00:39:32.723 17:36:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:32.723 17:36:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:32.723 17:36:09 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:39:32.723 17:36:09 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:39:32.723 17:36:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:39:32.723 17:36:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:32.723 17:36:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:32.723 ************************************ 00:39:32.723 START TEST raid_rebuild_test_sb_md_separate 00:39:32.723 
************************************ 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:39:32.723 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88261 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88261 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88261 ']' 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:32.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:32.724 17:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:32.724 [2024-11-26 17:36:09.956768] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:39:32.724 [2024-11-26 17:36:09.957214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88261 ] 00:39:32.724 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:32.724 Zero copy mechanism will not be used. 00:39:32.724 [2024-11-26 17:36:10.147055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:32.983 [2024-11-26 17:36:10.286002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.242 [2024-11-26 17:36:10.521948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:33.242 [2024-11-26 17:36:10.522242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:33.501 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:33.501 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:39:33.501 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:33.501 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:39:33.501 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.501 17:36:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:33.760 BaseBdev1_malloc 00:39:33.760 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.760 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:33.760 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.761 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:33.761 [2024-11-26 17:36:10.985302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:33.761 [2024-11-26 17:36:10.985378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:33.761 [2024-11-26 17:36:10.985405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:33.761 [2024-11-26 17:36:10.985421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:33.761 [2024-11-26 17:36:10.987845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:33.761 [2024-11-26 17:36:10.988098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:33.761 BaseBdev1 00:39:33.761 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.761 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:33.761 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:39:33.761 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.761 17:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:39:33.761 BaseBdev2_malloc 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:33.761 [2024-11-26 17:36:11.049708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:33.761 [2024-11-26 17:36:11.049774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:33.761 [2024-11-26 17:36:11.049796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:33.761 [2024-11-26 17:36:11.049813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:33.761 [2024-11-26 17:36:11.052348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:33.761 [2024-11-26 17:36:11.052388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:33.761 BaseBdev2 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:33.761 spare_malloc 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.761 17:36:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:33.761 spare_delay 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:33.761 [2024-11-26 17:36:11.128708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:33.761 [2024-11-26 17:36:11.128770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:33.761 [2024-11-26 17:36:11.128792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:39:33.761 [2024-11-26 17:36:11.128807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:33.761 [2024-11-26 17:36:11.131265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:33.761 [2024-11-26 17:36:11.131307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:33.761 spare 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:39:33.761 17:36:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:33.761 [2024-11-26 17:36:11.136760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:33.761 [2024-11-26 17:36:11.139123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:33.761 [2024-11-26 17:36:11.139309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:33.761 [2024-11-26 17:36:11.139326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:33.761 [2024-11-26 17:36:11.139401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:33.761 [2024-11-26 17:36:11.139562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:33.761 [2024-11-26 17:36:11.139574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:33.761 [2024-11-26 17:36:11.139675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:33.761 17:36:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:33.761 "name": "raid_bdev1", 00:39:33.761 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:33.761 "strip_size_kb": 0, 00:39:33.761 "state": "online", 00:39:33.761 "raid_level": "raid1", 00:39:33.761 "superblock": true, 00:39:33.761 "num_base_bdevs": 2, 00:39:33.761 "num_base_bdevs_discovered": 2, 00:39:33.761 "num_base_bdevs_operational": 2, 00:39:33.761 "base_bdevs_list": [ 00:39:33.761 { 00:39:33.761 "name": "BaseBdev1", 00:39:33.761 "uuid": "f33984e5-4ff1-5d7c-abae-2eadeba27a48", 00:39:33.761 "is_configured": true, 00:39:33.761 "data_offset": 256, 00:39:33.761 "data_size": 7936 00:39:33.761 }, 00:39:33.761 { 00:39:33.761 "name": "BaseBdev2", 00:39:33.761 "uuid": 
"c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:33.761 "is_configured": true, 00:39:33.761 "data_offset": 256, 00:39:33.761 "data_size": 7936 00:39:33.761 } 00:39:33.761 ] 00:39:33.761 }' 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:33.761 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:34.329 [2024-11-26 17:36:11.593098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:39:34.329 17:36:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:34.329 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:39:34.589 [2024-11-26 17:36:11.840995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:34.589 /dev/nbd0 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:34.589 17:36:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:34.589 1+0 records in 00:39:34.589 1+0 records out 00:39:34.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514439 s, 8.0 MB/s 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:39:34.589 17:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:39:35.524 7936+0 records in 00:39:35.524 7936+0 records out 00:39:35.524 32505856 bytes (33 MB, 31 MiB) copied, 0.800788 s, 40.6 MB/s 00:39:35.524 17:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:39:35.524 17:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:35.524 17:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:35.524 17:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:35.524 17:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:39:35.524 17:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:35.524 17:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:35.783 [2024-11-26 17:36:13.053980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:35.783 [2024-11-26 17:36:13.076142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:35.783 17:36:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:35.783 "name": "raid_bdev1", 00:39:35.783 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:35.783 "strip_size_kb": 0, 00:39:35.783 "state": "online", 00:39:35.783 "raid_level": "raid1", 00:39:35.783 "superblock": true, 00:39:35.783 "num_base_bdevs": 2, 00:39:35.783 "num_base_bdevs_discovered": 1, 00:39:35.783 "num_base_bdevs_operational": 1, 00:39:35.783 "base_bdevs_list": [ 00:39:35.783 { 00:39:35.783 "name": null, 00:39:35.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:35.783 "is_configured": false, 00:39:35.783 "data_offset": 0, 00:39:35.783 "data_size": 7936 00:39:35.783 }, 00:39:35.783 { 00:39:35.783 "name": "BaseBdev2", 00:39:35.783 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:35.783 "is_configured": true, 00:39:35.783 "data_offset": 256, 00:39:35.783 "data_size": 7936 00:39:35.783 } 
00:39:35.783 ] 00:39:35.783 }' 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:35.783 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:36.349 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:36.349 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.349 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:36.349 [2024-11-26 17:36:13.556219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:36.349 [2024-11-26 17:36:13.569450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:39:36.349 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.349 17:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:39:36.349 [2024-11-26 17:36:13.571906] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:37.281 17:36:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:37.281 "name": "raid_bdev1", 00:39:37.281 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:37.281 "strip_size_kb": 0, 00:39:37.281 "state": "online", 00:39:37.281 "raid_level": "raid1", 00:39:37.281 "superblock": true, 00:39:37.281 "num_base_bdevs": 2, 00:39:37.281 "num_base_bdevs_discovered": 2, 00:39:37.281 "num_base_bdevs_operational": 2, 00:39:37.281 "process": { 00:39:37.281 "type": "rebuild", 00:39:37.281 "target": "spare", 00:39:37.281 "progress": { 00:39:37.281 "blocks": 2560, 00:39:37.281 "percent": 32 00:39:37.281 } 00:39:37.281 }, 00:39:37.281 "base_bdevs_list": [ 00:39:37.281 { 00:39:37.281 "name": "spare", 00:39:37.281 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:37.281 "is_configured": true, 00:39:37.281 "data_offset": 256, 00:39:37.281 "data_size": 7936 00:39:37.281 }, 00:39:37.281 { 00:39:37.281 "name": "BaseBdev2", 00:39:37.281 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:37.281 "is_configured": true, 00:39:37.281 "data_offset": 256, 00:39:37.281 "data_size": 7936 00:39:37.281 } 00:39:37.281 ] 00:39:37.281 }' 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.281 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:37.281 [2024-11-26 17:36:14.706447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:37.539 [2024-11-26 17:36:14.784426] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:37.539 [2024-11-26 17:36:14.784529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:37.539 [2024-11-26 17:36:14.784551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:37.539 [2024-11-26 17:36:14.784572] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:37.539 "name": "raid_bdev1", 00:39:37.539 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:37.539 "strip_size_kb": 0, 00:39:37.539 "state": "online", 00:39:37.539 "raid_level": "raid1", 00:39:37.539 "superblock": true, 00:39:37.539 "num_base_bdevs": 2, 00:39:37.539 "num_base_bdevs_discovered": 1, 00:39:37.539 "num_base_bdevs_operational": 1, 00:39:37.539 "base_bdevs_list": [ 00:39:37.539 { 00:39:37.539 "name": null, 00:39:37.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:37.539 "is_configured": false, 00:39:37.539 "data_offset": 0, 00:39:37.539 "data_size": 7936 00:39:37.539 }, 00:39:37.539 { 00:39:37.539 "name": "BaseBdev2", 00:39:37.539 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:37.539 "is_configured": true, 00:39:37.539 "data_offset": 
256, 00:39:37.539 "data_size": 7936 00:39:37.539 } 00:39:37.539 ] 00:39:37.539 }' 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:37.539 17:36:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:38.107 "name": "raid_bdev1", 00:39:38.107 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:38.107 "strip_size_kb": 0, 00:39:38.107 "state": "online", 00:39:38.107 "raid_level": "raid1", 00:39:38.107 "superblock": true, 00:39:38.107 "num_base_bdevs": 2, 00:39:38.107 "num_base_bdevs_discovered": 1, 00:39:38.107 "num_base_bdevs_operational": 1, 
00:39:38.107 "base_bdevs_list": [ 00:39:38.107 { 00:39:38.107 "name": null, 00:39:38.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.107 "is_configured": false, 00:39:38.107 "data_offset": 0, 00:39:38.107 "data_size": 7936 00:39:38.107 }, 00:39:38.107 { 00:39:38.107 "name": "BaseBdev2", 00:39:38.107 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:38.107 "is_configured": true, 00:39:38.107 "data_offset": 256, 00:39:38.107 "data_size": 7936 00:39:38.107 } 00:39:38.107 ] 00:39:38.107 }' 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:38.107 [2024-11-26 17:36:15.403770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:38.107 [2024-11-26 17:36:15.418154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.107 17:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:39:38.107 [2024-11-26 17:36:15.420662] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:39.043 17:36:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:39.043 "name": "raid_bdev1", 00:39:39.043 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:39.043 "strip_size_kb": 0, 00:39:39.043 "state": "online", 00:39:39.043 "raid_level": "raid1", 00:39:39.043 "superblock": true, 00:39:39.043 "num_base_bdevs": 2, 00:39:39.043 "num_base_bdevs_discovered": 2, 00:39:39.043 "num_base_bdevs_operational": 2, 00:39:39.043 "process": { 00:39:39.043 "type": "rebuild", 00:39:39.043 "target": "spare", 00:39:39.043 "progress": { 00:39:39.043 "blocks": 2560, 00:39:39.043 "percent": 32 00:39:39.043 } 00:39:39.043 }, 00:39:39.043 "base_bdevs_list": [ 00:39:39.043 { 00:39:39.043 "name": "spare", 00:39:39.043 "uuid": 
"dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:39.043 "is_configured": true, 00:39:39.043 "data_offset": 256, 00:39:39.043 "data_size": 7936 00:39:39.043 }, 00:39:39.043 { 00:39:39.043 "name": "BaseBdev2", 00:39:39.043 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:39.043 "is_configured": true, 00:39:39.043 "data_offset": 256, 00:39:39.043 "data_size": 7936 00:39:39.043 } 00:39:39.043 ] 00:39:39.043 }' 00:39:39.043 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:39:39.302 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=730 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:39.302 
17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:39.302 "name": "raid_bdev1", 00:39:39.302 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:39.302 "strip_size_kb": 0, 00:39:39.302 "state": "online", 00:39:39.302 "raid_level": "raid1", 00:39:39.302 "superblock": true, 00:39:39.302 "num_base_bdevs": 2, 00:39:39.302 "num_base_bdevs_discovered": 2, 00:39:39.302 "num_base_bdevs_operational": 2, 00:39:39.302 "process": { 00:39:39.302 "type": "rebuild", 00:39:39.302 "target": "spare", 00:39:39.302 "progress": { 00:39:39.302 "blocks": 2816, 00:39:39.302 "percent": 35 00:39:39.302 } 00:39:39.302 }, 00:39:39.302 "base_bdevs_list": [ 00:39:39.302 { 00:39:39.302 "name": "spare", 00:39:39.302 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:39.302 "is_configured": true, 00:39:39.302 "data_offset": 256, 00:39:39.302 "data_size": 7936 00:39:39.302 
}, 00:39:39.302 { 00:39:39.302 "name": "BaseBdev2", 00:39:39.302 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:39.302 "is_configured": true, 00:39:39.302 "data_offset": 256, 00:39:39.302 "data_size": 7936 00:39:39.302 } 00:39:39.302 ] 00:39:39.302 }' 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:39.302 17:36:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:40.690 17:36:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:40.690 "name": "raid_bdev1", 00:39:40.690 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:40.690 "strip_size_kb": 0, 00:39:40.690 "state": "online", 00:39:40.690 "raid_level": "raid1", 00:39:40.690 "superblock": true, 00:39:40.690 "num_base_bdevs": 2, 00:39:40.690 "num_base_bdevs_discovered": 2, 00:39:40.690 "num_base_bdevs_operational": 2, 00:39:40.690 "process": { 00:39:40.690 "type": "rebuild", 00:39:40.690 "target": "spare", 00:39:40.690 "progress": { 00:39:40.690 "blocks": 5632, 00:39:40.690 "percent": 70 00:39:40.690 } 00:39:40.690 }, 00:39:40.690 "base_bdevs_list": [ 00:39:40.690 { 00:39:40.690 "name": "spare", 00:39:40.690 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:40.690 "is_configured": true, 00:39:40.690 "data_offset": 256, 00:39:40.690 "data_size": 7936 00:39:40.690 }, 00:39:40.690 { 00:39:40.690 "name": "BaseBdev2", 00:39:40.690 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:40.690 "is_configured": true, 00:39:40.690 "data_offset": 256, 00:39:40.690 "data_size": 7936 00:39:40.690 } 00:39:40.690 ] 00:39:40.690 }' 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:40.690 17:36:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:39:41.272 [2024-11-26 17:36:18.549285] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:41.272 [2024-11-26 17:36:18.549372] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:41.272 [2024-11-26 17:36:18.549493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:41.531 "name": "raid_bdev1", 00:39:41.531 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:41.531 
"strip_size_kb": 0, 00:39:41.531 "state": "online", 00:39:41.531 "raid_level": "raid1", 00:39:41.531 "superblock": true, 00:39:41.531 "num_base_bdevs": 2, 00:39:41.531 "num_base_bdevs_discovered": 2, 00:39:41.531 "num_base_bdevs_operational": 2, 00:39:41.531 "base_bdevs_list": [ 00:39:41.531 { 00:39:41.531 "name": "spare", 00:39:41.531 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:41.531 "is_configured": true, 00:39:41.531 "data_offset": 256, 00:39:41.531 "data_size": 7936 00:39:41.531 }, 00:39:41.531 { 00:39:41.531 "name": "BaseBdev2", 00:39:41.531 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:41.531 "is_configured": true, 00:39:41.531 "data_offset": 256, 00:39:41.531 "data_size": 7936 00:39:41.531 } 00:39:41.531 ] 00:39:41.531 }' 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:41.531 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:41.791 17:36:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:41.791 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:41.791 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.791 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:41.791 17:36:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:41.791 "name": "raid_bdev1", 00:39:41.791 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:41.791 "strip_size_kb": 0, 00:39:41.791 "state": "online", 00:39:41.791 "raid_level": "raid1", 00:39:41.791 "superblock": true, 00:39:41.791 "num_base_bdevs": 2, 00:39:41.791 "num_base_bdevs_discovered": 2, 00:39:41.791 "num_base_bdevs_operational": 2, 00:39:41.791 "base_bdevs_list": [ 00:39:41.791 { 00:39:41.791 "name": "spare", 00:39:41.791 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:41.791 "is_configured": true, 00:39:41.791 "data_offset": 256, 00:39:41.791 "data_size": 7936 00:39:41.791 }, 00:39:41.791 { 00:39:41.791 "name": "BaseBdev2", 00:39:41.791 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:41.791 "is_configured": true, 00:39:41.791 "data_offset": 256, 00:39:41.791 "data_size": 7936 00:39:41.791 } 00:39:41.791 ] 00:39:41.791 }' 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:41.791 17:36:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.791 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:41.791 "name": "raid_bdev1", 00:39:41.791 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:41.792 "strip_size_kb": 0, 00:39:41.792 "state": "online", 00:39:41.792 "raid_level": "raid1", 00:39:41.792 "superblock": true, 00:39:41.792 "num_base_bdevs": 2, 00:39:41.792 "num_base_bdevs_discovered": 2, 00:39:41.792 "num_base_bdevs_operational": 2, 00:39:41.792 "base_bdevs_list": [ 00:39:41.792 { 00:39:41.792 "name": "spare", 00:39:41.792 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:41.792 "is_configured": true, 00:39:41.792 "data_offset": 256, 00:39:41.792 "data_size": 7936 00:39:41.792 }, 00:39:41.792 { 00:39:41.792 "name": "BaseBdev2", 00:39:41.792 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:41.792 "is_configured": true, 00:39:41.792 "data_offset": 256, 00:39:41.792 "data_size": 7936 00:39:41.792 } 00:39:41.792 ] 00:39:41.792 }' 00:39:41.792 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:41.792 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:42.367 [2024-11-26 17:36:19.555424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:42.367 [2024-11-26 17:36:19.555660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:42.367 [2024-11-26 17:36:19.555804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:42.367 [2024-11-26 17:36:19.555890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:39:42.367 [2024-11-26 17:36:19.555903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:42.367 17:36:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:42.367 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:39:42.625 /dev/nbd0 00:39:42.625 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:42.625 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:42.625 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:42.625 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:39:42.625 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:42.625 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:42.625 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:42.625 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:42.626 1+0 records in 00:39:42.626 1+0 records out 00:39:42.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377048 
s, 10.9 MB/s 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:42.626 17:36:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:39:42.884 /dev/nbd1 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:42.884 1+0 records in 00:39:42.884 1+0 records out 00:39:42.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506398 s, 8.1 MB/s 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:42.884 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:43.143 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:39:43.143 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:43.143 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:43.143 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:43.143 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:39:43.143 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:43.143 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:43.403 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:43.662 
17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.662 17:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:43.662 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.662 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:43.663 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.663 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:43.663 [2024-11-26 17:36:21.008767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:43.663 [2024-11-26 17:36:21.008833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:43.663 [2024-11-26 17:36:21.008863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
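The nbd section earlier in this trace follows a fixed pattern: start the disk over RPC, poll `/proc/partitions` up to 20 times with `waitfornbd`, then read one 4 KiB block back with `dd ... iflag=direct` to prove the device is usable. A sketch of that polling helper follows; the second parameter (an alternate partitions file) is an addition made here for testability and is not part of the harness, which always reads `/proc/partitions`:

```shell
# Poll until the named nbd device shows up in the partition table.
# The partitions-file parameter is a hypothetical addition for testing;
# the real helper in common/autotest_common.sh reads /proc/partitions.
waitfornbd() {
    local nbd_name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}
```

Once both devices are up, the trace compares them with `cmp -i 1048576 /dev/nbd0 /dev/nbd1`, i.e. skipping the first 1 MiB of each device so only the data region is compared.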
00:39:43.663 [2024-11-26 17:36:21.008875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:43.663 [2024-11-26 17:36:21.011610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:43.663 [2024-11-26 17:36:21.011788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:43.663 [2024-11-26 17:36:21.011877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:43.663 [2024-11-26 17:36:21.011966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:43.663 [2024-11-26 17:36:21.012161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:43.663 spare 00:39:43.663 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.663 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:39:43.663 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.663 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:43.922 [2024-11-26 17:36:21.112259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:39:43.922 [2024-11-26 17:36:21.112301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:43.922 [2024-11-26 17:36:21.112442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:39:43.922 [2024-11-26 17:36:21.112629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:39:43.922 [2024-11-26 17:36:21.112640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:39:43.922 [2024-11-26 17:36:21.112830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
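At this point the array is back online and the harness calls `verify_raid_bdev_state raid_bdev1 online raid1 0 2`, which pulls the bdev's JSON out of `bdev_raid_get_bdevs all` with jq and compares the fields. A condensed jq-only sketch of that assertion (field names are taken from the JSON dumps in this log; the function name is invented here, and it assumes `jq` is installed):

```shell
# Check one raid bdev's state, level, and discovered base-bdev count
# against expected values. Input: bdev_raid_get_bdevs JSON on stdin.
check_raid_state() {
    local name=$1 state=$2 level=$3 discovered=$4
    local got
    got=$(jq -r --arg n "$name" \
        '.[] | select(.name == $n) |
         "\(.state) \(.raid_level) \(.num_base_bdevs_discovered)"')
    [[ "$got" == "$state $level $discovered" ]]
}
```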
00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:43.922 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.922 17:36:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:43.922 "name": "raid_bdev1", 00:39:43.922 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:43.922 "strip_size_kb": 0, 00:39:43.922 "state": "online", 00:39:43.922 "raid_level": "raid1", 00:39:43.922 "superblock": true, 00:39:43.922 "num_base_bdevs": 2, 00:39:43.922 "num_base_bdevs_discovered": 2, 00:39:43.922 "num_base_bdevs_operational": 2, 00:39:43.922 "base_bdevs_list": [ 00:39:43.922 { 00:39:43.922 "name": "spare", 00:39:43.922 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:43.922 "is_configured": true, 00:39:43.922 "data_offset": 256, 00:39:43.922 "data_size": 7936 00:39:43.922 }, 00:39:43.922 { 00:39:43.922 "name": "BaseBdev2", 00:39:43.922 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:43.922 "is_configured": true, 00:39:43.922 "data_offset": 256, 00:39:43.922 "data_size": 7936 00:39:43.923 } 00:39:43.923 ] 00:39:43.923 }' 00:39:43.923 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:43.923 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:44.182 "name": "raid_bdev1", 00:39:44.182 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:44.182 "strip_size_kb": 0, 00:39:44.182 "state": "online", 00:39:44.182 "raid_level": "raid1", 00:39:44.182 "superblock": true, 00:39:44.182 "num_base_bdevs": 2, 00:39:44.182 "num_base_bdevs_discovered": 2, 00:39:44.182 "num_base_bdevs_operational": 2, 00:39:44.182 "base_bdevs_list": [ 00:39:44.182 { 00:39:44.182 "name": "spare", 00:39:44.182 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:44.182 "is_configured": true, 00:39:44.182 "data_offset": 256, 00:39:44.182 "data_size": 7936 00:39:44.182 }, 00:39:44.182 { 00:39:44.182 "name": "BaseBdev2", 00:39:44.182 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:44.182 "is_configured": true, 00:39:44.182 "data_offset": 256, 00:39:44.182 "data_size": 7936 00:39:44.182 } 00:39:44.182 ] 00:39:44.182 }' 00:39:44.182 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:44.442 [2024-11-26 17:36:21.728994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:44.442 17:36:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:44.442 "name": "raid_bdev1", 00:39:44.442 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:44.442 "strip_size_kb": 0, 00:39:44.442 "state": "online", 00:39:44.442 "raid_level": "raid1", 00:39:44.442 "superblock": true, 00:39:44.442 "num_base_bdevs": 2, 00:39:44.442 "num_base_bdevs_discovered": 1, 00:39:44.442 "num_base_bdevs_operational": 1, 00:39:44.442 "base_bdevs_list": [ 00:39:44.442 { 00:39:44.442 "name": null, 00:39:44.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:44.442 "is_configured": false, 00:39:44.442 "data_offset": 0, 00:39:44.442 "data_size": 7936 00:39:44.442 }, 00:39:44.442 { 00:39:44.442 "name": "BaseBdev2", 00:39:44.442 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:44.442 "is_configured": true, 00:39:44.442 "data_offset": 256, 00:39:44.442 "data_size": 7936 00:39:44.442 } 
00:39:44.442 ] 00:39:44.442 }' 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:44.442 17:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:44.701 17:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:44.701 17:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.701 17:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:44.701 [2024-11-26 17:36:22.129156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:44.701 [2024-11-26 17:36:22.129433] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:44.701 [2024-11-26 17:36:22.129456] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
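The `bdev_raid_remove_base_bdev spare` a few lines up, followed by `bdev_raid_add_base_bdev raid_bdev1 spare` here, is the pair of RPCs that produces the "Started rebuild on raid bdev raid_bdev1" notice below. A sketch of that sequence, assuming a running SPDK target on `/var/tmp/spdk.sock` (the `rpc` wrapper and function name are shorthand introduced here, not part of the harness):

```shell
# Degrade the array by pulling a base bdev, then re-add it so the raid
# module starts a rebuild onto it. Assumes a running SPDK target.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rebuild_base_bdev() {
    local raid=$1 base=$2
    rpc bdev_raid_remove_base_bdev "$base"
    rpc bdev_raid_add_base_bdev "$raid" "$base"
}
```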
00:39:44.701 [2024-11-26 17:36:22.129500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:44.701 [2024-11-26 17:36:22.143712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:39:44.701 17:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.701 17:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:39:44.701 [2024-11-26 17:36:22.146340] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:46.080 "name": "raid_bdev1", 00:39:46.080 
"uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:46.080 "strip_size_kb": 0, 00:39:46.080 "state": "online", 00:39:46.080 "raid_level": "raid1", 00:39:46.080 "superblock": true, 00:39:46.080 "num_base_bdevs": 2, 00:39:46.080 "num_base_bdevs_discovered": 2, 00:39:46.080 "num_base_bdevs_operational": 2, 00:39:46.080 "process": { 00:39:46.080 "type": "rebuild", 00:39:46.080 "target": "spare", 00:39:46.080 "progress": { 00:39:46.080 "blocks": 2560, 00:39:46.080 "percent": 32 00:39:46.080 } 00:39:46.080 }, 00:39:46.080 "base_bdevs_list": [ 00:39:46.080 { 00:39:46.080 "name": "spare", 00:39:46.080 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:46.080 "is_configured": true, 00:39:46.080 "data_offset": 256, 00:39:46.080 "data_size": 7936 00:39:46.080 }, 00:39:46.080 { 00:39:46.080 "name": "BaseBdev2", 00:39:46.080 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:46.080 "is_configured": true, 00:39:46.080 "data_offset": 256, 00:39:46.080 "data_size": 7936 00:39:46.080 } 00:39:46.080 ] 00:39:46.080 }' 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:46.080 [2024-11-26 17:36:23.320600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:46.080 
[2024-11-26 17:36:23.357327] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:46.080 [2024-11-26 17:36:23.357389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:46.080 [2024-11-26 17:36:23.357406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:46.080 [2024-11-26 17:36:23.357430] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:46.080 17:36:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:46.080 "name": "raid_bdev1", 00:39:46.080 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:46.080 "strip_size_kb": 0, 00:39:46.080 "state": "online", 00:39:46.080 "raid_level": "raid1", 00:39:46.080 "superblock": true, 00:39:46.080 "num_base_bdevs": 2, 00:39:46.080 "num_base_bdevs_discovered": 1, 00:39:46.080 "num_base_bdevs_operational": 1, 00:39:46.080 "base_bdevs_list": [ 00:39:46.080 { 00:39:46.080 "name": null, 00:39:46.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:46.080 "is_configured": false, 00:39:46.080 "data_offset": 0, 00:39:46.080 "data_size": 7936 00:39:46.080 }, 00:39:46.080 { 00:39:46.080 "name": "BaseBdev2", 00:39:46.080 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:46.080 "is_configured": true, 00:39:46.080 "data_offset": 256, 00:39:46.080 "data_size": 7936 00:39:46.080 } 00:39:46.080 ] 00:39:46.080 }' 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:46.080 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:46.649 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:46.649 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.649 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:39:46.649 [2024-11-26 17:36:23.818681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:46.649 [2024-11-26 17:36:23.818902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:46.649 [2024-11-26 17:36:23.818942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:39:46.649 [2024-11-26 17:36:23.818959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:46.649 [2024-11-26 17:36:23.819285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:46.649 [2024-11-26 17:36:23.819307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:46.649 [2024-11-26 17:36:23.819375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:46.649 [2024-11-26 17:36:23.819393] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:46.649 [2024-11-26 17:36:23.819407] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:39:46.649 [2024-11-26 17:36:23.819433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:46.649 [2024-11-26 17:36:23.834344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:39:46.649 spare 00:39:46.649 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.649 17:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:39:46.649 [2024-11-26 17:36:23.836812] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.587 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:47.587 "name": 
"raid_bdev1", 00:39:47.587 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:47.587 "strip_size_kb": 0, 00:39:47.587 "state": "online", 00:39:47.587 "raid_level": "raid1", 00:39:47.587 "superblock": true, 00:39:47.587 "num_base_bdevs": 2, 00:39:47.587 "num_base_bdevs_discovered": 2, 00:39:47.587 "num_base_bdevs_operational": 2, 00:39:47.587 "process": { 00:39:47.587 "type": "rebuild", 00:39:47.587 "target": "spare", 00:39:47.587 "progress": { 00:39:47.587 "blocks": 2560, 00:39:47.587 "percent": 32 00:39:47.587 } 00:39:47.587 }, 00:39:47.587 "base_bdevs_list": [ 00:39:47.587 { 00:39:47.587 "name": "spare", 00:39:47.587 "uuid": "dd86305a-35ea-58d0-ad22-a193828ce190", 00:39:47.587 "is_configured": true, 00:39:47.587 "data_offset": 256, 00:39:47.587 "data_size": 7936 00:39:47.587 }, 00:39:47.587 { 00:39:47.587 "name": "BaseBdev2", 00:39:47.588 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:47.588 "is_configured": true, 00:39:47.588 "data_offset": 256, 00:39:47.588 "data_size": 7936 00:39:47.588 } 00:39:47.588 ] 00:39:47.588 }' 00:39:47.588 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:47.588 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:47.588 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:47.588 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:47.588 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:39:47.588 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.588 17:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:47.588 [2024-11-26 17:36:24.978978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:39:47.847 [2024-11-26 17:36:25.047763] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:47.847 [2024-11-26 17:36:25.047828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:47.847 [2024-11-26 17:36:25.047848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:47.847 [2024-11-26 17:36:25.047857] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:47.847 "name": "raid_bdev1", 00:39:47.847 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:47.847 "strip_size_kb": 0, 00:39:47.847 "state": "online", 00:39:47.847 "raid_level": "raid1", 00:39:47.847 "superblock": true, 00:39:47.847 "num_base_bdevs": 2, 00:39:47.847 "num_base_bdevs_discovered": 1, 00:39:47.847 "num_base_bdevs_operational": 1, 00:39:47.847 "base_bdevs_list": [ 00:39:47.847 { 00:39:47.847 "name": null, 00:39:47.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:47.847 "is_configured": false, 00:39:47.847 "data_offset": 0, 00:39:47.847 "data_size": 7936 00:39:47.847 }, 00:39:47.847 { 00:39:47.847 "name": "BaseBdev2", 00:39:47.847 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:47.847 "is_configured": true, 00:39:47.847 "data_offset": 256, 00:39:47.847 "data_size": 7936 00:39:47.847 } 00:39:47.847 ] 00:39:47.847 }' 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:47.847 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:48.107 17:36:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:48.107 "name": "raid_bdev1", 00:39:48.107 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:48.107 "strip_size_kb": 0, 00:39:48.107 "state": "online", 00:39:48.107 "raid_level": "raid1", 00:39:48.107 "superblock": true, 00:39:48.107 "num_base_bdevs": 2, 00:39:48.107 "num_base_bdevs_discovered": 1, 00:39:48.107 "num_base_bdevs_operational": 1, 00:39:48.107 "base_bdevs_list": [ 00:39:48.107 { 00:39:48.107 "name": null, 00:39:48.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:48.107 "is_configured": false, 00:39:48.107 "data_offset": 0, 00:39:48.107 "data_size": 7936 00:39:48.107 }, 00:39:48.107 { 00:39:48.107 "name": "BaseBdev2", 00:39:48.107 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:48.107 "is_configured": true, 00:39:48.107 "data_offset": 256, 00:39:48.107 "data_size": 7936 00:39:48.107 } 00:39:48.107 ] 00:39:48.107 }' 00:39:48.107 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.366 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:48.366 [2024-11-26 17:36:25.617442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:48.366 [2024-11-26 17:36:25.617635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:48.366 [2024-11-26 17:36:25.617672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:39:48.366 [2024-11-26 17:36:25.617686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:48.366 [2024-11-26 17:36:25.617965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:48.366 [2024-11-26 17:36:25.617982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:39:48.366 [2024-11-26 17:36:25.618038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:48.366 [2024-11-26 17:36:25.618067] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:48.367 [2024-11-26 17:36:25.618087] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:48.367 [2024-11-26 17:36:25.618100] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:39:48.367 BaseBdev1 00:39:48.367 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.367 17:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:49.305 "name": "raid_bdev1", 00:39:49.305 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:49.305 "strip_size_kb": 0, 00:39:49.305 "state": "online", 00:39:49.305 "raid_level": "raid1", 00:39:49.305 "superblock": true, 00:39:49.305 "num_base_bdevs": 2, 00:39:49.305 "num_base_bdevs_discovered": 1, 00:39:49.305 "num_base_bdevs_operational": 1, 00:39:49.305 "base_bdevs_list": [ 00:39:49.305 { 00:39:49.305 "name": null, 00:39:49.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:49.305 "is_configured": false, 00:39:49.305 "data_offset": 0, 00:39:49.305 "data_size": 7936 00:39:49.305 }, 00:39:49.305 { 00:39:49.305 "name": "BaseBdev2", 00:39:49.305 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:49.305 "is_configured": true, 00:39:49.305 "data_offset": 256, 00:39:49.305 "data_size": 7936 00:39:49.305 } 00:39:49.305 ] 00:39:49.305 }' 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:49.305 17:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:49.874 "name": "raid_bdev1", 00:39:49.874 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:49.874 "strip_size_kb": 0, 00:39:49.874 "state": "online", 00:39:49.874 "raid_level": "raid1", 00:39:49.874 "superblock": true, 00:39:49.874 "num_base_bdevs": 2, 00:39:49.874 "num_base_bdevs_discovered": 1, 00:39:49.874 "num_base_bdevs_operational": 1, 00:39:49.874 "base_bdevs_list": [ 00:39:49.874 { 00:39:49.874 "name": null, 00:39:49.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:49.874 "is_configured": false, 00:39:49.874 "data_offset": 0, 00:39:49.874 "data_size": 7936 00:39:49.874 }, 00:39:49.874 { 00:39:49.874 "name": "BaseBdev2", 00:39:49.874 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:49.874 "is_configured": 
true, 00:39:49.874 "data_offset": 256, 00:39:49.874 "data_size": 7936 00:39:49.874 } 00:39:49.874 ] 00:39:49.874 }' 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:49.874 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:49.875 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:49.875 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.875 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:49.875 [2024-11-26 17:36:27.213813] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:49.875 [2024-11-26 17:36:27.213997] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:49.875 [2024-11-26 17:36:27.214018] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:49.875 request: 00:39:49.875 { 00:39:49.875 "base_bdev": "BaseBdev1", 00:39:49.875 "raid_bdev": "raid_bdev1", 00:39:49.875 "method": "bdev_raid_add_base_bdev", 00:39:49.875 "req_id": 1 00:39:49.875 } 00:39:49.875 Got JSON-RPC error response 00:39:49.875 response: 00:39:49.875 { 00:39:49.875 "code": -22, 00:39:49.875 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:49.875 } 00:39:49.875 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:49.875 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:39:49.875 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:49.875 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:49.875 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:49.875 17:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:50.812 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.071 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:51.071 "name": "raid_bdev1", 00:39:51.071 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:51.071 "strip_size_kb": 0, 00:39:51.071 "state": "online", 00:39:51.071 "raid_level": "raid1", 00:39:51.071 "superblock": true, 00:39:51.071 "num_base_bdevs": 2, 00:39:51.071 "num_base_bdevs_discovered": 1, 00:39:51.071 "num_base_bdevs_operational": 1, 00:39:51.071 "base_bdevs_list": [ 00:39:51.071 { 00:39:51.071 "name": null, 00:39:51.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:51.071 "is_configured": false, 00:39:51.071 
"data_offset": 0, 00:39:51.071 "data_size": 7936 00:39:51.071 }, 00:39:51.071 { 00:39:51.071 "name": "BaseBdev2", 00:39:51.072 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:51.072 "is_configured": true, 00:39:51.072 "data_offset": 256, 00:39:51.072 "data_size": 7936 00:39:51.072 } 00:39:51.072 ] 00:39:51.072 }' 00:39:51.072 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:51.072 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:51.331 "name": "raid_bdev1", 00:39:51.331 "uuid": "b1fe168b-dd02-4609-a252-3f26dcd9899e", 00:39:51.331 
"strip_size_kb": 0, 00:39:51.331 "state": "online", 00:39:51.331 "raid_level": "raid1", 00:39:51.331 "superblock": true, 00:39:51.331 "num_base_bdevs": 2, 00:39:51.331 "num_base_bdevs_discovered": 1, 00:39:51.331 "num_base_bdevs_operational": 1, 00:39:51.331 "base_bdevs_list": [ 00:39:51.331 { 00:39:51.331 "name": null, 00:39:51.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:51.331 "is_configured": false, 00:39:51.331 "data_offset": 0, 00:39:51.331 "data_size": 7936 00:39:51.331 }, 00:39:51.331 { 00:39:51.331 "name": "BaseBdev2", 00:39:51.331 "uuid": "c478f87d-d383-5293-8cda-48bb89935ee2", 00:39:51.331 "is_configured": true, 00:39:51.331 "data_offset": 256, 00:39:51.331 "data_size": 7936 00:39:51.331 } 00:39:51.331 ] 00:39:51.331 }' 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:51.331 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88261 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88261 ']' 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88261 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88261 00:39:51.591 killing process with 
pid 88261 00:39:51.591 Received shutdown signal, test time was about 60.000000 seconds 00:39:51.591 00:39:51.591 Latency(us) 00:39:51.591 [2024-11-26T17:36:29.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.591 [2024-11-26T17:36:29.038Z] =================================================================================================================== 00:39:51.591 [2024-11-26T17:36:29.038Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88261' 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88261 00:39:51.591 [2024-11-26 17:36:28.850057] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:51.591 [2024-11-26 17:36:28.850203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:51.591 [2024-11-26 17:36:28.850256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:51.591 [2024-11-26 17:36:28.850272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:39:51.591 17:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88261 00:39:51.850 [2024-11-26 17:36:29.204158] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:53.229 17:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:39:53.229 00:39:53.229 real 0m20.600s 00:39:53.229 user 0m26.616s 00:39:53.229 sys 0m3.203s 00:39:53.229 17:36:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:53.229 17:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:53.229 ************************************ 00:39:53.229 END TEST raid_rebuild_test_sb_md_separate 00:39:53.229 ************************************ 00:39:53.229 17:36:30 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:39:53.229 17:36:30 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:39:53.229 17:36:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:53.229 17:36:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:53.229 17:36:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:53.229 ************************************ 00:39:53.229 START TEST raid_state_function_test_sb_md_interleaved 00:39:53.229 ************************************ 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:53.229 17:36:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88954 00:39:53.229 Process raid pid: 88954 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88954' 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88954 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88954 ']' 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:53.229 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:53.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:53.230 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:53.230 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:53.230 17:36:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:53.230 [2024-11-26 17:36:30.628153] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:39:53.230 [2024-11-26 17:36:30.628323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:53.489 [2024-11-26 17:36:30.813332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.748 [2024-11-26 17:36:30.952537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.006 [2024-11-26 17:36:31.204187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:54.006 [2024-11-26 17:36:31.204228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:54.265 [2024-11-26 17:36:31.498942] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:54.265 [2024-11-26 17:36:31.499003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:54.265 [2024-11-26 17:36:31.499015] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:54.265 [2024-11-26 17:36:31.499029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:54.265 17:36:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:54.265 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:54.266 17:36:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:54.266 "name": "Existed_Raid", 00:39:54.266 "uuid": "2138bc18-9f15-4a1e-b03f-10a1bcbee6a5", 00:39:54.266 "strip_size_kb": 0, 00:39:54.266 "state": "configuring", 00:39:54.266 "raid_level": "raid1", 00:39:54.266 "superblock": true, 00:39:54.266 "num_base_bdevs": 2, 00:39:54.266 "num_base_bdevs_discovered": 0, 00:39:54.266 "num_base_bdevs_operational": 2, 00:39:54.266 "base_bdevs_list": [ 00:39:54.266 { 00:39:54.266 "name": "BaseBdev1", 00:39:54.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:54.266 "is_configured": false, 00:39:54.266 "data_offset": 0, 00:39:54.266 "data_size": 0 00:39:54.266 }, 00:39:54.266 { 00:39:54.266 "name": "BaseBdev2", 00:39:54.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:54.266 "is_configured": false, 00:39:54.266 "data_offset": 0, 00:39:54.266 "data_size": 0 00:39:54.266 } 00:39:54.266 ] 00:39:54.266 }' 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:54.266 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:54.524 [2024-11-26 17:36:31.954930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:54.524 [2024-11-26 17:36:31.954966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:54.524 [2024-11-26 17:36:31.962927] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:54.524 [2024-11-26 17:36:31.962964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:54.524 [2024-11-26 17:36:31.962974] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:54.524 [2024-11-26 17:36:31.962991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.524 17:36:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:54.783 [2024-11-26 17:36:32.015980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:54.783 BaseBdev1 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:54.783 [ 00:39:54.783 { 00:39:54.783 "name": "BaseBdev1", 00:39:54.783 "aliases": [ 00:39:54.783 "d3ffc213-627d-4d07-a5e8-4453766b0967" 00:39:54.783 ], 00:39:54.783 "product_name": "Malloc disk", 00:39:54.783 "block_size": 4128, 00:39:54.783 "num_blocks": 8192, 00:39:54.783 "uuid": "d3ffc213-627d-4d07-a5e8-4453766b0967", 00:39:54.783 "md_size": 32, 00:39:54.783 
"md_interleave": true, 00:39:54.783 "dif_type": 0, 00:39:54.783 "assigned_rate_limits": { 00:39:54.783 "rw_ios_per_sec": 0, 00:39:54.783 "rw_mbytes_per_sec": 0, 00:39:54.783 "r_mbytes_per_sec": 0, 00:39:54.783 "w_mbytes_per_sec": 0 00:39:54.783 }, 00:39:54.783 "claimed": true, 00:39:54.783 "claim_type": "exclusive_write", 00:39:54.783 "zoned": false, 00:39:54.783 "supported_io_types": { 00:39:54.783 "read": true, 00:39:54.783 "write": true, 00:39:54.783 "unmap": true, 00:39:54.783 "flush": true, 00:39:54.783 "reset": true, 00:39:54.783 "nvme_admin": false, 00:39:54.783 "nvme_io": false, 00:39:54.783 "nvme_io_md": false, 00:39:54.783 "write_zeroes": true, 00:39:54.783 "zcopy": true, 00:39:54.783 "get_zone_info": false, 00:39:54.783 "zone_management": false, 00:39:54.783 "zone_append": false, 00:39:54.783 "compare": false, 00:39:54.783 "compare_and_write": false, 00:39:54.783 "abort": true, 00:39:54.783 "seek_hole": false, 00:39:54.783 "seek_data": false, 00:39:54.783 "copy": true, 00:39:54.783 "nvme_iov_md": false 00:39:54.783 }, 00:39:54.783 "memory_domains": [ 00:39:54.783 { 00:39:54.783 "dma_device_id": "system", 00:39:54.783 "dma_device_type": 1 00:39:54.783 }, 00:39:54.783 { 00:39:54.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:54.783 "dma_device_type": 2 00:39:54.783 } 00:39:54.783 ], 00:39:54.783 "driver_specific": {} 00:39:54.783 } 00:39:54.783 ] 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:54.783 17:36:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:54.783 "name": "Existed_Raid", 00:39:54.783 "uuid": "b8b365f9-b32e-41f7-a27d-2717b82ea0ab", 00:39:54.783 "strip_size_kb": 0, 00:39:54.783 "state": "configuring", 00:39:54.783 "raid_level": "raid1", 
00:39:54.783 "superblock": true, 00:39:54.783 "num_base_bdevs": 2, 00:39:54.783 "num_base_bdevs_discovered": 1, 00:39:54.783 "num_base_bdevs_operational": 2, 00:39:54.783 "base_bdevs_list": [ 00:39:54.783 { 00:39:54.783 "name": "BaseBdev1", 00:39:54.783 "uuid": "d3ffc213-627d-4d07-a5e8-4453766b0967", 00:39:54.783 "is_configured": true, 00:39:54.783 "data_offset": 256, 00:39:54.783 "data_size": 7936 00:39:54.783 }, 00:39:54.783 { 00:39:54.783 "name": "BaseBdev2", 00:39:54.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:54.783 "is_configured": false, 00:39:54.783 "data_offset": 0, 00:39:54.783 "data_size": 0 00:39:54.783 } 00:39:54.783 ] 00:39:54.783 }' 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:54.783 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:55.350 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:55.350 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:55.351 [2024-11-26 17:36:32.508134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:55.351 [2024-11-26 17:36:32.508174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:55.351 [2024-11-26 17:36:32.520201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:55.351 [2024-11-26 17:36:32.522701] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:55.351 [2024-11-26 17:36:32.522853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:55.351 
17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:55.351 "name": "Existed_Raid", 00:39:55.351 "uuid": "b4673749-5ba9-4907-a9d4-eca0d9f3dbae", 00:39:55.351 "strip_size_kb": 0, 00:39:55.351 "state": "configuring", 00:39:55.351 "raid_level": "raid1", 00:39:55.351 "superblock": true, 00:39:55.351 "num_base_bdevs": 2, 00:39:55.351 "num_base_bdevs_discovered": 1, 00:39:55.351 "num_base_bdevs_operational": 2, 00:39:55.351 "base_bdevs_list": [ 00:39:55.351 { 00:39:55.351 "name": "BaseBdev1", 00:39:55.351 "uuid": "d3ffc213-627d-4d07-a5e8-4453766b0967", 00:39:55.351 "is_configured": true, 00:39:55.351 "data_offset": 256, 00:39:55.351 "data_size": 7936 00:39:55.351 }, 00:39:55.351 { 00:39:55.351 "name": "BaseBdev2", 00:39:55.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:55.351 "is_configured": false, 00:39:55.351 "data_offset": 0, 00:39:55.351 "data_size": 0 00:39:55.351 } 00:39:55.351 ] 00:39:55.351 }' 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:39:55.351 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:55.610 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:39:55.610 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.610 17:36:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:55.610 [2024-11-26 17:36:33.032695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:55.610 [2024-11-26 17:36:33.032935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:55.610 [2024-11-26 17:36:33.032951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:55.610 [2024-11-26 17:36:33.033071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:39:55.610 [2024-11-26 17:36:33.033162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:55.610 [2024-11-26 17:36:33.033190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:39:55.610 [2024-11-26 17:36:33.033260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:55.610 BaseBdev2 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.610 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:55.869 [ 00:39:55.869 { 00:39:55.869 "name": "BaseBdev2", 00:39:55.869 "aliases": [ 00:39:55.869 "b6e8aa12-5caf-4c85-9bf5-01deeb2008f2" 00:39:55.869 ], 00:39:55.869 "product_name": "Malloc disk", 00:39:55.869 "block_size": 4128, 00:39:55.869 "num_blocks": 8192, 00:39:55.869 "uuid": "b6e8aa12-5caf-4c85-9bf5-01deeb2008f2", 00:39:55.869 "md_size": 32, 00:39:55.869 "md_interleave": true, 00:39:55.869 "dif_type": 0, 00:39:55.869 "assigned_rate_limits": { 00:39:55.869 "rw_ios_per_sec": 0, 00:39:55.869 "rw_mbytes_per_sec": 0, 00:39:55.869 "r_mbytes_per_sec": 0, 00:39:55.869 "w_mbytes_per_sec": 0 00:39:55.869 }, 00:39:55.869 "claimed": true, 00:39:55.869 "claim_type": "exclusive_write", 
00:39:55.869 "zoned": false, 00:39:55.869 "supported_io_types": { 00:39:55.869 "read": true, 00:39:55.869 "write": true, 00:39:55.869 "unmap": true, 00:39:55.869 "flush": true, 00:39:55.869 "reset": true, 00:39:55.869 "nvme_admin": false, 00:39:55.869 "nvme_io": false, 00:39:55.869 "nvme_io_md": false, 00:39:55.869 "write_zeroes": true, 00:39:55.869 "zcopy": true, 00:39:55.869 "get_zone_info": false, 00:39:55.869 "zone_management": false, 00:39:55.869 "zone_append": false, 00:39:55.869 "compare": false, 00:39:55.869 "compare_and_write": false, 00:39:55.869 "abort": true, 00:39:55.869 "seek_hole": false, 00:39:55.869 "seek_data": false, 00:39:55.869 "copy": true, 00:39:55.869 "nvme_iov_md": false 00:39:55.869 }, 00:39:55.869 "memory_domains": [ 00:39:55.869 { 00:39:55.869 "dma_device_id": "system", 00:39:55.869 "dma_device_type": 1 00:39:55.869 }, 00:39:55.869 { 00:39:55.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:55.869 "dma_device_type": 2 00:39:55.869 } 00:39:55.869 ], 00:39:55.869 "driver_specific": {} 00:39:55.869 } 00:39:55.869 ] 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:55.869 
17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:55.869 "name": "Existed_Raid", 00:39:55.869 "uuid": "b4673749-5ba9-4907-a9d4-eca0d9f3dbae", 00:39:55.869 "strip_size_kb": 0, 00:39:55.869 "state": "online", 00:39:55.869 "raid_level": "raid1", 00:39:55.869 "superblock": true, 00:39:55.869 "num_base_bdevs": 2, 00:39:55.869 "num_base_bdevs_discovered": 2, 00:39:55.869 
"num_base_bdevs_operational": 2, 00:39:55.869 "base_bdevs_list": [ 00:39:55.869 { 00:39:55.869 "name": "BaseBdev1", 00:39:55.869 "uuid": "d3ffc213-627d-4d07-a5e8-4453766b0967", 00:39:55.869 "is_configured": true, 00:39:55.869 "data_offset": 256, 00:39:55.869 "data_size": 7936 00:39:55.869 }, 00:39:55.869 { 00:39:55.869 "name": "BaseBdev2", 00:39:55.869 "uuid": "b6e8aa12-5caf-4c85-9bf5-01deeb2008f2", 00:39:55.869 "is_configured": true, 00:39:55.869 "data_offset": 256, 00:39:55.869 "data_size": 7936 00:39:55.869 } 00:39:55.869 ] 00:39:55.869 }' 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:55.869 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:56.128 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:39:56.128 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:56.128 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:56.129 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:56.129 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:39:56.129 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:56.129 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:56.129 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.129 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:56.129 17:36:33 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:56.129 [2024-11-26 17:36:33.501197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:56.129 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.129 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:56.129 "name": "Existed_Raid", 00:39:56.129 "aliases": [ 00:39:56.129 "b4673749-5ba9-4907-a9d4-eca0d9f3dbae" 00:39:56.129 ], 00:39:56.129 "product_name": "Raid Volume", 00:39:56.129 "block_size": 4128, 00:39:56.129 "num_blocks": 7936, 00:39:56.129 "uuid": "b4673749-5ba9-4907-a9d4-eca0d9f3dbae", 00:39:56.129 "md_size": 32, 00:39:56.129 "md_interleave": true, 00:39:56.129 "dif_type": 0, 00:39:56.129 "assigned_rate_limits": { 00:39:56.129 "rw_ios_per_sec": 0, 00:39:56.129 "rw_mbytes_per_sec": 0, 00:39:56.129 "r_mbytes_per_sec": 0, 00:39:56.129 "w_mbytes_per_sec": 0 00:39:56.129 }, 00:39:56.129 "claimed": false, 00:39:56.129 "zoned": false, 00:39:56.129 "supported_io_types": { 00:39:56.129 "read": true, 00:39:56.129 "write": true, 00:39:56.129 "unmap": false, 00:39:56.129 "flush": false, 00:39:56.129 "reset": true, 00:39:56.129 "nvme_admin": false, 00:39:56.129 "nvme_io": false, 00:39:56.129 "nvme_io_md": false, 00:39:56.129 "write_zeroes": true, 00:39:56.129 "zcopy": false, 00:39:56.129 "get_zone_info": false, 00:39:56.129 "zone_management": false, 00:39:56.129 "zone_append": false, 00:39:56.129 "compare": false, 00:39:56.129 "compare_and_write": false, 00:39:56.129 "abort": false, 00:39:56.129 "seek_hole": false, 00:39:56.129 "seek_data": false, 00:39:56.129 "copy": false, 00:39:56.129 "nvme_iov_md": false 00:39:56.129 }, 00:39:56.129 "memory_domains": [ 00:39:56.129 { 00:39:56.129 "dma_device_id": "system", 00:39:56.129 "dma_device_type": 1 00:39:56.129 }, 00:39:56.129 { 00:39:56.129 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:39:56.129 "dma_device_type": 2 00:39:56.129 }, 00:39:56.129 { 00:39:56.129 "dma_device_id": "system", 00:39:56.129 "dma_device_type": 1 00:39:56.129 }, 00:39:56.129 { 00:39:56.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:56.129 "dma_device_type": 2 00:39:56.129 } 00:39:56.129 ], 00:39:56.129 "driver_specific": { 00:39:56.129 "raid": { 00:39:56.129 "uuid": "b4673749-5ba9-4907-a9d4-eca0d9f3dbae", 00:39:56.129 "strip_size_kb": 0, 00:39:56.129 "state": "online", 00:39:56.129 "raid_level": "raid1", 00:39:56.129 "superblock": true, 00:39:56.129 "num_base_bdevs": 2, 00:39:56.129 "num_base_bdevs_discovered": 2, 00:39:56.129 "num_base_bdevs_operational": 2, 00:39:56.129 "base_bdevs_list": [ 00:39:56.129 { 00:39:56.129 "name": "BaseBdev1", 00:39:56.129 "uuid": "d3ffc213-627d-4d07-a5e8-4453766b0967", 00:39:56.129 "is_configured": true, 00:39:56.129 "data_offset": 256, 00:39:56.129 "data_size": 7936 00:39:56.129 }, 00:39:56.129 { 00:39:56.129 "name": "BaseBdev2", 00:39:56.129 "uuid": "b6e8aa12-5caf-4c85-9bf5-01deeb2008f2", 00:39:56.129 "is_configured": true, 00:39:56.129 "data_offset": 256, 00:39:56.129 "data_size": 7936 00:39:56.129 } 00:39:56.129 ] 00:39:56.129 } 00:39:56.129 } 00:39:56.129 }' 00:39:56.129 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:39:56.388 BaseBdev2' 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:39:56.388 
17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.388 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:56.388 [2024-11-26 17:36:33.736934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:56.647 17:36:33 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.647 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:56.647 "name": "Existed_Raid", 00:39:56.647 "uuid": "b4673749-5ba9-4907-a9d4-eca0d9f3dbae", 00:39:56.647 "strip_size_kb": 0, 00:39:56.647 "state": "online", 00:39:56.647 "raid_level": "raid1", 00:39:56.647 "superblock": true, 00:39:56.647 "num_base_bdevs": 2, 00:39:56.647 "num_base_bdevs_discovered": 1, 00:39:56.647 "num_base_bdevs_operational": 1, 00:39:56.647 "base_bdevs_list": [ 00:39:56.647 { 00:39:56.647 "name": null, 00:39:56.647 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:39:56.647 "is_configured": false, 00:39:56.647 "data_offset": 0, 00:39:56.647 "data_size": 7936 00:39:56.647 }, 00:39:56.647 { 00:39:56.648 "name": "BaseBdev2", 00:39:56.648 "uuid": "b6e8aa12-5caf-4c85-9bf5-01deeb2008f2", 00:39:56.648 "is_configured": true, 00:39:56.648 "data_offset": 256, 00:39:56.648 "data_size": 7936 00:39:56.648 } 00:39:56.648 ] 00:39:56.648 }' 00:39:56.648 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:56.648 17:36:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:56.906 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:39:56.906 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:56.906 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:56.906 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:56.906 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.906 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:56.906 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:39:57.166 17:36:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:57.166 [2024-11-26 17:36:34.361753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:57.166 [2024-11-26 17:36:34.361921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:57.166 [2024-11-26 17:36:34.490415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:57.166 [2024-11-26 17:36:34.490493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:57.166 [2024-11-26 17:36:34.490514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88954 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88954 ']' 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88954 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88954 00:39:57.166 killing process with pid 88954 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88954' 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88954 00:39:57.166 [2024-11-26 17:36:34.583220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:57.166 17:36:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88954 00:39:57.166 [2024-11-26 17:36:34.605018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:59.105 
17:36:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:39:59.105 00:39:59.105 real 0m5.590s 00:39:59.105 user 0m7.772s 00:39:59.105 sys 0m1.051s 00:39:59.105 17:36:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:59.105 ************************************ 00:39:59.105 END TEST raid_state_function_test_sb_md_interleaved 00:39:59.105 ************************************ 00:39:59.105 17:36:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:59.105 17:36:36 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:39:59.105 17:36:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:59.105 17:36:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:59.105 17:36:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:59.105 ************************************ 00:39:59.105 START TEST raid_superblock_test_md_interleaved 00:39:59.105 ************************************ 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89212 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89212 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89212 ']' 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:59.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:59.105 17:36:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:59.105 [2024-11-26 17:36:36.282155] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:39:59.105 [2024-11-26 17:36:36.282345] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89212 ] 00:39:59.105 [2024-11-26 17:36:36.476172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.377 [2024-11-26 17:36:36.645542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.636 [2024-11-26 17:36:36.939121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:59.636 [2024-11-26 17:36:36.939169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.894 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.154 malloc1 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.154 [2024-11-26 17:36:37.361848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:00.154 [2024-11-26 17:36:37.362188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:00.154 [2024-11-26 17:36:37.362283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:40:00.154 [2024-11-26 17:36:37.362414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:00.154 
[2024-11-26 17:36:37.365653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:00.154 [2024-11-26 17:36:37.365837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:00.154 pt1 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.154 malloc2 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.154 [2024-11-26 17:36:37.428501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:00.154 [2024-11-26 17:36:37.428572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:00.154 [2024-11-26 17:36:37.428602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:40:00.154 [2024-11-26 17:36:37.428616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:00.154 [2024-11-26 17:36:37.431435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:00.154 [2024-11-26 17:36:37.431476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:00.154 pt2 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.154 [2024-11-26 17:36:37.440537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:00.154 [2024-11-26 17:36:37.443308] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:00.154 [2024-11-26 17:36:37.443535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:40:00.154 [2024-11-26 17:36:37.443552] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:40:00.154 [2024-11-26 17:36:37.443641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:40:00.154 [2024-11-26 17:36:37.443726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:40:00.154 [2024-11-26 17:36:37.443742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:40:00.154 [2024-11-26 17:36:37.443831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:00.154 
17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:00.154 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:00.155 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:00.155 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.155 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:00.155 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.155 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.155 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:00.155 "name": "raid_bdev1", 00:40:00.155 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:00.155 "strip_size_kb": 0, 00:40:00.155 "state": "online", 00:40:00.155 "raid_level": "raid1", 00:40:00.155 "superblock": true, 00:40:00.155 "num_base_bdevs": 2, 00:40:00.155 "num_base_bdevs_discovered": 2, 00:40:00.155 "num_base_bdevs_operational": 2, 00:40:00.155 "base_bdevs_list": [ 00:40:00.155 { 00:40:00.155 "name": "pt1", 00:40:00.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:00.155 "is_configured": true, 00:40:00.155 "data_offset": 256, 00:40:00.155 "data_size": 7936 00:40:00.155 }, 00:40:00.155 { 00:40:00.155 "name": "pt2", 00:40:00.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:00.155 "is_configured": true, 00:40:00.155 "data_offset": 256, 00:40:00.155 "data_size": 7936 00:40:00.155 } 00:40:00.155 ] 00:40:00.155 }' 00:40:00.155 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:00.155 17:36:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.723 [2024-11-26 17:36:37.881006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:00.723 "name": "raid_bdev1", 00:40:00.723 "aliases": [ 00:40:00.723 "6ba39f33-f7d5-454c-8231-d6b6e89b14e0" 00:40:00.723 ], 00:40:00.723 "product_name": "Raid Volume", 00:40:00.723 "block_size": 4128, 00:40:00.723 "num_blocks": 7936, 00:40:00.723 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:00.723 "md_size": 32, 
00:40:00.723 "md_interleave": true, 00:40:00.723 "dif_type": 0, 00:40:00.723 "assigned_rate_limits": { 00:40:00.723 "rw_ios_per_sec": 0, 00:40:00.723 "rw_mbytes_per_sec": 0, 00:40:00.723 "r_mbytes_per_sec": 0, 00:40:00.723 "w_mbytes_per_sec": 0 00:40:00.723 }, 00:40:00.723 "claimed": false, 00:40:00.723 "zoned": false, 00:40:00.723 "supported_io_types": { 00:40:00.723 "read": true, 00:40:00.723 "write": true, 00:40:00.723 "unmap": false, 00:40:00.723 "flush": false, 00:40:00.723 "reset": true, 00:40:00.723 "nvme_admin": false, 00:40:00.723 "nvme_io": false, 00:40:00.723 "nvme_io_md": false, 00:40:00.723 "write_zeroes": true, 00:40:00.723 "zcopy": false, 00:40:00.723 "get_zone_info": false, 00:40:00.723 "zone_management": false, 00:40:00.723 "zone_append": false, 00:40:00.723 "compare": false, 00:40:00.723 "compare_and_write": false, 00:40:00.723 "abort": false, 00:40:00.723 "seek_hole": false, 00:40:00.723 "seek_data": false, 00:40:00.723 "copy": false, 00:40:00.723 "nvme_iov_md": false 00:40:00.723 }, 00:40:00.723 "memory_domains": [ 00:40:00.723 { 00:40:00.723 "dma_device_id": "system", 00:40:00.723 "dma_device_type": 1 00:40:00.723 }, 00:40:00.723 { 00:40:00.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:00.723 "dma_device_type": 2 00:40:00.723 }, 00:40:00.723 { 00:40:00.723 "dma_device_id": "system", 00:40:00.723 "dma_device_type": 1 00:40:00.723 }, 00:40:00.723 { 00:40:00.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:00.723 "dma_device_type": 2 00:40:00.723 } 00:40:00.723 ], 00:40:00.723 "driver_specific": { 00:40:00.723 "raid": { 00:40:00.723 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:00.723 "strip_size_kb": 0, 00:40:00.723 "state": "online", 00:40:00.723 "raid_level": "raid1", 00:40:00.723 "superblock": true, 00:40:00.723 "num_base_bdevs": 2, 00:40:00.723 "num_base_bdevs_discovered": 2, 00:40:00.723 "num_base_bdevs_operational": 2, 00:40:00.723 "base_bdevs_list": [ 00:40:00.723 { 00:40:00.723 "name": "pt1", 00:40:00.723 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:40:00.723 "is_configured": true, 00:40:00.723 "data_offset": 256, 00:40:00.723 "data_size": 7936 00:40:00.723 }, 00:40:00.723 { 00:40:00.723 "name": "pt2", 00:40:00.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:00.723 "is_configured": true, 00:40:00.723 "data_offset": 256, 00:40:00.723 "data_size": 7936 00:40:00.723 } 00:40:00.723 ] 00:40:00.723 } 00:40:00.723 } 00:40:00.723 }' 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:40:00.723 pt2' 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.723 17:36:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:40:00.723 17:36:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:40:00.723 [2024-11-26 17:36:38.116975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- 
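The checks above build a space-joined tuple of metadata parameters with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and string-compare the raid bdev against each base bdev. A minimal Python sketch of the same join-and-compare, using the field values visible in the bdev dump above (the JSON here is trimmed to just the compared fields):

```python
import json

# Bdev JSON as dumped by bdev_get_bdevs in the log above, trimmed to the
# four fields the test compares
bdev = json.loads('{"block_size": 4128, "md_size": 32, "md_interleave": true, "dif_type": 0}')

# Equivalent of: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq's join() stringifies numbers and booleans (lowercase true/false),
# hence the str(...).lower() conversion here
fields = [bdev["block_size"], bdev["md_size"], bdev["md_interleave"], bdev["dif_type"]]
cmp_str = " ".join(str(f).lower() for f in fields)

# This is the value the test binds to cmp_raid_bdev / cmp_base_bdev
assert cmp_str == "4128 32 true 0"
print(cmp_str)
```

The 4128-byte block size is the 4096-byte data block plus the 32-byte interleaved metadata area, which is what the `md_interleave: true` / `md_size: 32` pair in the dump describes.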
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6ba39f33-f7d5-454c-8231-d6b6e89b14e0 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 6ba39f33-f7d5-454c-8231-d6b6e89b14e0 ']' 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.723 [2024-11-26 17:36:38.160665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:00.723 [2024-11-26 17:36:38.160694] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:00.723 [2024-11-26 17:36:38.160802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:00.723 [2024-11-26 17:36:38.160876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:00.723 [2024-11-26 17:36:38.160894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:40:00.723 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.983 17:36:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.983 17:36:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.983 [2024-11-26 17:36:38.284772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:40:00.983 [2024-11-26 17:36:38.287871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:40:00.983 [2024-11-26 17:36:38.287959] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:40:00.983 [2024-11-26 17:36:38.288027] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:40:00.983 [2024-11-26 17:36:38.288064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:00.983 [2024-11-26 17:36:38.288081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:40:00.983 request: 00:40:00.983 { 00:40:00.983 "name": "raid_bdev1", 00:40:00.983 "raid_level": "raid1", 00:40:00.983 "base_bdevs": [ 00:40:00.983 "malloc1", 00:40:00.983 "malloc2" 00:40:00.983 ], 00:40:00.983 "superblock": false, 00:40:00.983 "method": "bdev_raid_create", 00:40:00.983 "req_id": 1 00:40:00.983 } 00:40:00.983 Got JSON-RPC error response 00:40:00.983 response: 00:40:00.983 { 00:40:00.983 "code": -17, 00:40:00.983 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:40:00.983 } 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:40:00.983 17:36:38 
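The `"code": -17` in the JSON-RPC error payload above is a negated POSIX errno (`EEXIST`), which is why the message ends in "File exists". A small sketch of decoding such a response, with the payload copied from the log:

```python
import errno
import json
import os

# JSON-RPC error payload as printed in the log above
response = json.loads(
    '{"code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists"}'
)

# SPDK's JSON-RPC layer reports failures as negated errno values
err = -response["code"]
assert err == errno.EEXIST

# On Linux/glibc this prints "File exists", matching the message suffix
print(os.strerror(err))
```

This is why the test wraps the duplicate `bdev_raid_create` in `NOT ...`: the RPC is expected to fail with a nonzero exit, which the `es=1` bookkeeping afterwards confirms.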
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.983 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.983 [2024-11-26 17:36:38.348804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:00.983 [2024-11-26 17:36:38.349015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:00.983 [2024-11-26 17:36:38.349108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:40:00.983 [2024-11-26 17:36:38.349334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:00.983 [2024-11-26 17:36:38.352712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:00.984 [2024-11-26 17:36:38.352883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:00.984 [2024-11-26 17:36:38.353041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:40:00.984 [2024-11-26 17:36:38.353196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:00.984 pt1 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.984 17:36:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:00.984 
"name": "raid_bdev1", 00:40:00.984 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:00.984 "strip_size_kb": 0, 00:40:00.984 "state": "configuring", 00:40:00.984 "raid_level": "raid1", 00:40:00.984 "superblock": true, 00:40:00.984 "num_base_bdevs": 2, 00:40:00.984 "num_base_bdevs_discovered": 1, 00:40:00.984 "num_base_bdevs_operational": 2, 00:40:00.984 "base_bdevs_list": [ 00:40:00.984 { 00:40:00.984 "name": "pt1", 00:40:00.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:00.984 "is_configured": true, 00:40:00.984 "data_offset": 256, 00:40:00.984 "data_size": 7936 00:40:00.984 }, 00:40:00.984 { 00:40:00.984 "name": null, 00:40:00.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:00.984 "is_configured": false, 00:40:00.984 "data_offset": 256, 00:40:00.984 "data_size": 7936 00:40:00.984 } 00:40:00.984 ] 00:40:00.984 }' 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:00.984 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:01.551 [2024-11-26 17:36:38.797295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:01.551 [2024-11-26 17:36:38.797387] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:01.551 [2024-11-26 17:36:38.797416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:40:01.551 [2024-11-26 17:36:38.797435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:01.551 [2024-11-26 17:36:38.797661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:01.551 [2024-11-26 17:36:38.797687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:01.551 [2024-11-26 17:36:38.797749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:40:01.551 [2024-11-26 17:36:38.797779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:01.551 [2024-11-26 17:36:38.797884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:40:01.551 [2024-11-26 17:36:38.797901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:40:01.551 [2024-11-26 17:36:38.797990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:40:01.551 [2024-11-26 17:36:38.798088] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:40:01.551 [2024-11-26 17:36:38.798100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:40:01.551 [2024-11-26 17:36:38.798179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:01.551 pt2 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:40:01.551 17:36:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:01.551 "name": 
"raid_bdev1", 00:40:01.551 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:01.551 "strip_size_kb": 0, 00:40:01.551 "state": "online", 00:40:01.551 "raid_level": "raid1", 00:40:01.551 "superblock": true, 00:40:01.551 "num_base_bdevs": 2, 00:40:01.551 "num_base_bdevs_discovered": 2, 00:40:01.551 "num_base_bdevs_operational": 2, 00:40:01.551 "base_bdevs_list": [ 00:40:01.551 { 00:40:01.551 "name": "pt1", 00:40:01.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:01.551 "is_configured": true, 00:40:01.551 "data_offset": 256, 00:40:01.551 "data_size": 7936 00:40:01.551 }, 00:40:01.551 { 00:40:01.551 "name": "pt2", 00:40:01.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:01.551 "is_configured": true, 00:40:01.551 "data_offset": 256, 00:40:01.551 "data_size": 7936 00:40:01.551 } 00:40:01.551 ] 00:40:01.551 }' 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:01.551 17:36:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:02.120 17:36:39 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:02.120 [2024-11-26 17:36:39.285764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.120 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:02.120 "name": "raid_bdev1", 00:40:02.120 "aliases": [ 00:40:02.120 "6ba39f33-f7d5-454c-8231-d6b6e89b14e0" 00:40:02.120 ], 00:40:02.120 "product_name": "Raid Volume", 00:40:02.120 "block_size": 4128, 00:40:02.120 "num_blocks": 7936, 00:40:02.120 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:02.120 "md_size": 32, 00:40:02.120 "md_interleave": true, 00:40:02.120 "dif_type": 0, 00:40:02.120 "assigned_rate_limits": { 00:40:02.120 "rw_ios_per_sec": 0, 00:40:02.120 "rw_mbytes_per_sec": 0, 00:40:02.120 "r_mbytes_per_sec": 0, 00:40:02.120 "w_mbytes_per_sec": 0 00:40:02.120 }, 00:40:02.120 "claimed": false, 00:40:02.120 "zoned": false, 00:40:02.120 "supported_io_types": { 00:40:02.120 "read": true, 00:40:02.120 "write": true, 00:40:02.120 "unmap": false, 00:40:02.120 "flush": false, 00:40:02.120 "reset": true, 00:40:02.120 "nvme_admin": false, 00:40:02.120 "nvme_io": false, 00:40:02.120 "nvme_io_md": false, 00:40:02.120 "write_zeroes": true, 00:40:02.120 "zcopy": false, 00:40:02.120 "get_zone_info": false, 00:40:02.120 "zone_management": false, 00:40:02.120 "zone_append": false, 00:40:02.120 "compare": false, 00:40:02.120 "compare_and_write": false, 00:40:02.120 "abort": false, 00:40:02.120 "seek_hole": false, 00:40:02.120 "seek_data": false, 00:40:02.120 "copy": false, 00:40:02.120 "nvme_iov_md": 
false 00:40:02.120 }, 00:40:02.120 "memory_domains": [ 00:40:02.120 { 00:40:02.120 "dma_device_id": "system", 00:40:02.120 "dma_device_type": 1 00:40:02.120 }, 00:40:02.120 { 00:40:02.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:02.120 "dma_device_type": 2 00:40:02.120 }, 00:40:02.120 { 00:40:02.120 "dma_device_id": "system", 00:40:02.120 "dma_device_type": 1 00:40:02.120 }, 00:40:02.120 { 00:40:02.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:02.120 "dma_device_type": 2 00:40:02.120 } 00:40:02.120 ], 00:40:02.120 "driver_specific": { 00:40:02.120 "raid": { 00:40:02.120 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:02.120 "strip_size_kb": 0, 00:40:02.120 "state": "online", 00:40:02.120 "raid_level": "raid1", 00:40:02.120 "superblock": true, 00:40:02.120 "num_base_bdevs": 2, 00:40:02.120 "num_base_bdevs_discovered": 2, 00:40:02.120 "num_base_bdevs_operational": 2, 00:40:02.120 "base_bdevs_list": [ 00:40:02.120 { 00:40:02.120 "name": "pt1", 00:40:02.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:02.120 "is_configured": true, 00:40:02.120 "data_offset": 256, 00:40:02.120 "data_size": 7936 00:40:02.120 }, 00:40:02.120 { 00:40:02.120 "name": "pt2", 00:40:02.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:02.121 "is_configured": true, 00:40:02.121 "data_offset": 256, 00:40:02.121 "data_size": 7936 00:40:02.121 } 00:40:02.121 ] 00:40:02.121 } 00:40:02.121 } 00:40:02.121 }' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:40:02.121 pt2' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.121 [2024-11-26 17:36:39.529740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 6ba39f33-f7d5-454c-8231-d6b6e89b14e0 '!=' 6ba39f33-f7d5-454c-8231-d6b6e89b14e0 ']' 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.121 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.380 [2024-11-26 17:36:39.569536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:40:02.380 "name": "raid_bdev1", 00:40:02.380 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:02.380 "strip_size_kb": 0, 00:40:02.380 "state": "online", 00:40:02.380 "raid_level": "raid1", 00:40:02.380 "superblock": true, 00:40:02.380 "num_base_bdevs": 2, 00:40:02.380 "num_base_bdevs_discovered": 1, 00:40:02.380 "num_base_bdevs_operational": 1, 00:40:02.380 "base_bdevs_list": [ 00:40:02.380 { 00:40:02.380 "name": null, 00:40:02.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:02.380 "is_configured": false, 00:40:02.380 "data_offset": 0, 00:40:02.380 "data_size": 7936 00:40:02.380 }, 00:40:02.380 { 00:40:02.380 "name": "pt2", 00:40:02.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:02.380 "is_configured": true, 00:40:02.380 "data_offset": 256, 00:40:02.380 "data_size": 7936 00:40:02.380 } 00:40:02.380 ] 00:40:02.380 }' 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:02.380 17:36:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.638 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:02.638 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.638 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.638 [2024-11-26 17:36:40.041620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:02.638 [2024-11-26 17:36:40.041657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:02.638 [2024-11-26 17:36:40.041745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:02.638 [2024-11-26 17:36:40.041808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:40:02.638 [2024-11-26 17:36:40.041826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:40:02.638 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.638 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:02.638 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.638 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.638 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:40:02.638 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.897 [2024-11-26 17:36:40.113647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:02.897 [2024-11-26 17:36:40.113716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:02.897 [2024-11-26 17:36:40.113738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:40:02.897 [2024-11-26 17:36:40.113756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:02.897 [2024-11-26 17:36:40.116896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:02.897 [2024-11-26 17:36:40.117079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:02.897 [2024-11-26 17:36:40.117241] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:40:02.897 [2024-11-26 17:36:40.117350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:02.897 pt2 00:40:02.897 [2024-11-26 17:36:40.117524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:40:02.897 [2024-11-26 17:36:40.117546] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:40:02.897 [2024-11-26 17:36:40.117660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:40:02.897 [2024-11-26 17:36:40.117737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:40:02.897 [2024-11-26 17:36:40.117747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:40:02.897 [2024-11-26 17:36:40.117871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:02.897 17:36:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:02.897 "name": "raid_bdev1", 00:40:02.897 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:02.897 "strip_size_kb": 0, 00:40:02.897 "state": "online", 00:40:02.897 "raid_level": "raid1", 00:40:02.897 "superblock": true, 00:40:02.897 "num_base_bdevs": 2, 00:40:02.897 "num_base_bdevs_discovered": 1, 00:40:02.897 "num_base_bdevs_operational": 1, 00:40:02.897 "base_bdevs_list": [ 00:40:02.897 { 00:40:02.897 "name": null, 00:40:02.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:02.897 "is_configured": false, 00:40:02.897 "data_offset": 256, 00:40:02.897 "data_size": 7936 00:40:02.897 }, 00:40:02.897 { 00:40:02.897 "name": "pt2", 00:40:02.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:02.897 "is_configured": true, 00:40:02.897 "data_offset": 256, 00:40:02.897 "data_size": 7936 00:40:02.897 } 00:40:02.897 ] 00:40:02.897 }' 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:02.897 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:03.156 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:03.156 17:36:40 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.156 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:03.156 [2024-11-26 17:36:40.577778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:03.156 [2024-11-26 17:36:40.577831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:03.156 [2024-11-26 17:36:40.577936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:03.156 [2024-11-26 17:36:40.578014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:03.156 [2024-11-26 17:36:40.578031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:40:03.156 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.156 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:03.156 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:40:03.156 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.156 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:03.156 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.414 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:40:03.414 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:40:03.414 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:40:03.414 17:36:40 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:03.414 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.414 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:03.414 [2024-11-26 17:36:40.641804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:03.414 [2024-11-26 17:36:40.641882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:03.414 [2024-11-26 17:36:40.641911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:40:03.414 [2024-11-26 17:36:40.641926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:03.414 [2024-11-26 17:36:40.644994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:03.414 [2024-11-26 17:36:40.645037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:03.414 [2024-11-26 17:36:40.645129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:40:03.414 [2024-11-26 17:36:40.645193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:03.414 [2024-11-26 17:36:40.645328] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:40:03.414 [2024-11-26 17:36:40.645350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:03.414 [2024-11-26 17:36:40.645379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:40:03.414 [2024-11-26 17:36:40.645468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:03.414 [2024-11-26 17:36:40.645561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:40:03.414 [2024-11-26 17:36:40.645573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:40:03.414 [2024-11-26 17:36:40.645661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:40:03.415 [2024-11-26 17:36:40.645729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:40:03.415 [2024-11-26 17:36:40.645743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:40:03.415 [2024-11-26 17:36:40.645870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:03.415 pt1 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:03.415 17:36:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:03.415 "name": "raid_bdev1", 00:40:03.415 "uuid": "6ba39f33-f7d5-454c-8231-d6b6e89b14e0", 00:40:03.415 "strip_size_kb": 0, 00:40:03.415 "state": "online", 00:40:03.415 "raid_level": "raid1", 00:40:03.415 "superblock": true, 00:40:03.415 "num_base_bdevs": 2, 00:40:03.415 "num_base_bdevs_discovered": 1, 00:40:03.415 "num_base_bdevs_operational": 1, 00:40:03.415 "base_bdevs_list": [ 00:40:03.415 { 00:40:03.415 "name": null, 00:40:03.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:03.415 "is_configured": false, 00:40:03.415 "data_offset": 256, 00:40:03.415 "data_size": 7936 00:40:03.415 }, 00:40:03.415 { 00:40:03.415 "name": "pt2", 00:40:03.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:03.415 "is_configured": true, 00:40:03.415 "data_offset": 256, 00:40:03.415 "data_size": 7936 00:40:03.415 } 00:40:03.415 ] 00:40:03.415 }' 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:03.415 17:36:40 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:40:03.674 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:40:03.674 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:40:03.674 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.674 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:03.934 [2024-11-26 17:36:41.166381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 6ba39f33-f7d5-454c-8231-d6b6e89b14e0 '!=' 6ba39f33-f7d5-454c-8231-d6b6e89b14e0 ']' 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89212 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89212 ']' 00:40:03.934 17:36:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89212 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89212 00:40:03.934 killing process with pid 89212 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89212' 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89212 00:40:03.934 [2024-11-26 17:36:41.244774] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:03.934 17:36:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89212 00:40:03.934 [2024-11-26 17:36:41.244890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:03.934 [2024-11-26 17:36:41.244955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:03.934 [2024-11-26 17:36:41.244980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:40:04.193 [2024-11-26 17:36:41.520300] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:05.570 17:36:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:40:05.570 00:40:05.570 real 0m6.822s 00:40:05.570 user 0m10.101s 00:40:05.570 sys 0m1.325s 
00:40:05.570 ************************************ 00:40:05.570 END TEST raid_superblock_test_md_interleaved 00:40:05.570 17:36:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:05.570 17:36:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:05.570 ************************************ 00:40:05.829 17:36:43 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:40:05.829 17:36:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:05.829 17:36:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:05.829 17:36:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:05.829 ************************************ 00:40:05.829 START TEST raid_rebuild_test_sb_md_interleaved 00:40:05.829 ************************************ 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:05.829 17:36:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:40:05.829 
17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89540 00:40:05.829 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89540 00:40:05.830 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89540 ']' 00:40:05.830 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.830 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:05.830 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:05.830 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:05.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.830 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:05.830 17:36:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:05.830 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:05.830 Zero copy mechanism will not be used. 00:40:05.830 [2024-11-26 17:36:43.161141] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:40:05.830 [2024-11-26 17:36:43.161293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89540 ] 00:40:06.089 [2024-11-26 17:36:43.345081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.089 [2024-11-26 17:36:43.497529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.349 [2024-11-26 17:36:43.760996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:06.349 [2024-11-26 17:36:43.761091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:06.919 BaseBdev1_malloc 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.919 17:36:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:06.919 [2024-11-26 17:36:44.198175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:06.919 [2024-11-26 17:36:44.198250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:06.919 [2024-11-26 17:36:44.198277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:40:06.919 [2024-11-26 17:36:44.198293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:06.919 [2024-11-26 17:36:44.200801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:06.919 [2024-11-26 17:36:44.200844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:06.919 BaseBdev1 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:06.919 BaseBdev2_malloc 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:40:06.919 [2024-11-26 17:36:44.255672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:06.919 [2024-11-26 17:36:44.256020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:06.919 [2024-11-26 17:36:44.256088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:40:06.919 [2024-11-26 17:36:44.256110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:06.919 [2024-11-26 17:36:44.258633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:06.919 [2024-11-26 17:36:44.258675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:06.919 BaseBdev2 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:06.919 spare_malloc 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:06.919 spare_delay 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:06.919 [2024-11-26 17:36:44.334263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:06.919 [2024-11-26 17:36:44.334347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:06.919 [2024-11-26 17:36:44.334378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:40:06.919 [2024-11-26 17:36:44.334395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:06.919 [2024-11-26 17:36:44.336870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:06.919 [2024-11-26 17:36:44.337166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:06.919 spare 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:06.919 [2024-11-26 17:36:44.342298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:06.919 [2024-11-26 17:36:44.344894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:06.919 [2024-11-26 
17:36:44.345260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:40:06.919 [2024-11-26 17:36:44.345285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:40:06.919 [2024-11-26 17:36:44.345371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:40:06.919 [2024-11-26 17:36:44.345452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:40:06.919 [2024-11-26 17:36:44.345462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:40:06.919 [2024-11-26 17:36:44.345538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:06.919 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:07.179 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.179 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:07.179 "name": "raid_bdev1", 00:40:07.179 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:07.179 "strip_size_kb": 0, 00:40:07.179 "state": "online", 00:40:07.179 "raid_level": "raid1", 00:40:07.179 "superblock": true, 00:40:07.179 "num_base_bdevs": 2, 00:40:07.179 "num_base_bdevs_discovered": 2, 00:40:07.179 "num_base_bdevs_operational": 2, 00:40:07.179 "base_bdevs_list": [ 00:40:07.179 { 00:40:07.179 "name": "BaseBdev1", 00:40:07.179 "uuid": "9c55e96d-6877-5716-90bd-29dec146438c", 00:40:07.179 "is_configured": true, 00:40:07.179 "data_offset": 256, 00:40:07.179 "data_size": 7936 00:40:07.179 }, 00:40:07.179 { 00:40:07.179 "name": "BaseBdev2", 00:40:07.179 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:07.179 "is_configured": true, 00:40:07.179 "data_offset": 256, 00:40:07.179 "data_size": 7936 00:40:07.179 } 00:40:07.179 ] 00:40:07.179 }' 00:40:07.179 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:07.179 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:07.438 17:36:44 
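[editor's note] The `verify_raid_bdev_state` helper traced above fetches `raid_bdev_info` via `rpc_cmd bdev_raid_get_bdevs all | jq` and compares fields against the expected state. A minimal standalone sketch of the same comparisons, run against the JSON snapshot captured in the log (abridged to the checked fields; plain `grep` stands in for the helper's `jq` extraction — this is an illustration, not the SPDK script itself):

```shell
#!/bin/sh
# raid_bdev_info as captured in the log above, abridged to the fields
# that verify_raid_bdev_state compares.
raid_bdev_info='{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'

# Expected values, mirroring the call: verify_raid_bdev_state raid_bdev1 online raid1 0 2
echo "$raid_bdev_info" | grep -q '"state": "online"'                 || exit 1
echo "$raid_bdev_info" | grep -q '"raid_level": "raid1"'             || exit 1
echo "$raid_bdev_info" | grep -q '"strip_size_kb": 0'                || exit 1
echo "$raid_bdev_info" | grep -q '"num_base_bdevs_operational": 2'   || exit 1
echo "raid_bdev1 state OK"
```

After `bdev_raid_remove_base_bdev BaseBdev1` later in the log, the same check is rerun expecting `num_base_bdevs_operational` of 1, with the removed slot showing a null name and the all-zero UUID.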
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:07.438 [2024-11-26 17:36:44.762716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:40:07.438 17:36:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:07.438 [2024-11-26 17:36:44.850387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.438 17:36:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:07.438 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.698 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:07.698 "name": "raid_bdev1", 00:40:07.698 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:07.698 "strip_size_kb": 0, 00:40:07.698 "state": "online", 00:40:07.698 "raid_level": "raid1", 00:40:07.698 "superblock": true, 00:40:07.698 "num_base_bdevs": 2, 00:40:07.698 "num_base_bdevs_discovered": 1, 00:40:07.698 "num_base_bdevs_operational": 1, 00:40:07.698 "base_bdevs_list": [ 00:40:07.698 { 00:40:07.698 "name": null, 00:40:07.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:07.698 "is_configured": false, 00:40:07.698 "data_offset": 0, 00:40:07.698 "data_size": 7936 00:40:07.698 }, 00:40:07.698 { 00:40:07.698 "name": "BaseBdev2", 00:40:07.698 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:07.698 "is_configured": true, 00:40:07.698 "data_offset": 256, 00:40:07.698 "data_size": 7936 00:40:07.698 } 00:40:07.698 ] 00:40:07.698 }' 00:40:07.698 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:07.698 17:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:07.957 17:36:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:07.957 17:36:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.957 17:36:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:07.957 [2024-11-26 17:36:45.294484] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:07.957 [2024-11-26 17:36:45.316274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:40:07.957 17:36:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.957 17:36:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:40:07.957 [2024-11-26 17:36:45.318637] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:08.894 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:08.894 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:08.894 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:08.894 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:08.894 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:08.894 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:08.894 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.894 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:08.894 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:09.154 "name": "raid_bdev1", 00:40:09.154 
"uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:09.154 "strip_size_kb": 0, 00:40:09.154 "state": "online", 00:40:09.154 "raid_level": "raid1", 00:40:09.154 "superblock": true, 00:40:09.154 "num_base_bdevs": 2, 00:40:09.154 "num_base_bdevs_discovered": 2, 00:40:09.154 "num_base_bdevs_operational": 2, 00:40:09.154 "process": { 00:40:09.154 "type": "rebuild", 00:40:09.154 "target": "spare", 00:40:09.154 "progress": { 00:40:09.154 "blocks": 2560, 00:40:09.154 "percent": 32 00:40:09.154 } 00:40:09.154 }, 00:40:09.154 "base_bdevs_list": [ 00:40:09.154 { 00:40:09.154 "name": "spare", 00:40:09.154 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:09.154 "is_configured": true, 00:40:09.154 "data_offset": 256, 00:40:09.154 "data_size": 7936 00:40:09.154 }, 00:40:09.154 { 00:40:09.154 "name": "BaseBdev2", 00:40:09.154 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:09.154 "is_configured": true, 00:40:09.154 "data_offset": 256, 00:40:09.154 "data_size": 7936 00:40:09.154 } 00:40:09.154 ] 00:40:09.154 }' 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:09.154 [2024-11-26 17:36:46.459902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:40:09.154 [2024-11-26 17:36:46.529589] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:09.154 [2024-11-26 17:36:46.529655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:09.154 [2024-11-26 17:36:46.529672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:09.154 [2024-11-26 17:36:46.529690] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:09.154 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.414 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:09.414 "name": "raid_bdev1", 00:40:09.414 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:09.414 "strip_size_kb": 0, 00:40:09.414 "state": "online", 00:40:09.414 "raid_level": "raid1", 00:40:09.414 "superblock": true, 00:40:09.414 "num_base_bdevs": 2, 00:40:09.414 "num_base_bdevs_discovered": 1, 00:40:09.414 "num_base_bdevs_operational": 1, 00:40:09.414 "base_bdevs_list": [ 00:40:09.414 { 00:40:09.414 "name": null, 00:40:09.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:09.414 "is_configured": false, 00:40:09.414 "data_offset": 0, 00:40:09.414 "data_size": 7936 00:40:09.414 }, 00:40:09.414 { 00:40:09.414 "name": "BaseBdev2", 00:40:09.414 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:09.414 "is_configured": true, 00:40:09.414 "data_offset": 256, 00:40:09.414 "data_size": 7936 00:40:09.414 } 00:40:09.414 ] 00:40:09.414 }' 00:40:09.414 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:09.414 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:09.672 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:09.672 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:40:09.672 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:09.672 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:09.672 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:09.672 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:09.672 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:09.672 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.672 17:36:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:09.672 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.672 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:09.672 "name": "raid_bdev1", 00:40:09.672 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:09.672 "strip_size_kb": 0, 00:40:09.672 "state": "online", 00:40:09.672 "raid_level": "raid1", 00:40:09.672 "superblock": true, 00:40:09.673 "num_base_bdevs": 2, 00:40:09.673 "num_base_bdevs_discovered": 1, 00:40:09.673 "num_base_bdevs_operational": 1, 00:40:09.673 "base_bdevs_list": [ 00:40:09.673 { 00:40:09.673 "name": null, 00:40:09.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:09.673 "is_configured": false, 00:40:09.673 "data_offset": 0, 00:40:09.673 "data_size": 7936 00:40:09.673 }, 00:40:09.673 { 00:40:09.673 "name": "BaseBdev2", 00:40:09.673 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:09.673 "is_configured": true, 00:40:09.673 "data_offset": 256, 00:40:09.673 "data_size": 7936 00:40:09.673 } 00:40:09.673 ] 00:40:09.673 }' 
00:40:09.673 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:09.673 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:09.673 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:09.931 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:09.931 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:09.931 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:09.931 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:09.931 [2024-11-26 17:36:47.130377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:09.931 [2024-11-26 17:36:47.148606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:40:09.931 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:09.931 17:36:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:40:09.931 [2024-11-26 17:36:47.151296] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:10.867 "name": "raid_bdev1", 00:40:10.867 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:10.867 "strip_size_kb": 0, 00:40:10.867 "state": "online", 00:40:10.867 "raid_level": "raid1", 00:40:10.867 "superblock": true, 00:40:10.867 "num_base_bdevs": 2, 00:40:10.867 "num_base_bdevs_discovered": 2, 00:40:10.867 "num_base_bdevs_operational": 2, 00:40:10.867 "process": { 00:40:10.867 "type": "rebuild", 00:40:10.867 "target": "spare", 00:40:10.867 "progress": { 00:40:10.867 "blocks": 2560, 00:40:10.867 "percent": 32 00:40:10.867 } 00:40:10.867 }, 00:40:10.867 "base_bdevs_list": [ 00:40:10.867 { 00:40:10.867 "name": "spare", 00:40:10.867 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:10.867 "is_configured": true, 00:40:10.867 "data_offset": 256, 00:40:10.867 "data_size": 7936 00:40:10.867 }, 00:40:10.867 { 00:40:10.867 "name": "BaseBdev2", 00:40:10.867 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:10.867 "is_configured": true, 00:40:10.867 "data_offset": 256, 00:40:10.867 "data_size": 7936 00:40:10.867 } 00:40:10.867 ] 00:40:10.867 }' 00:40:10.867 17:36:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:40:10.867 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=762 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:10.867 17:36:48 
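[editor's note] The trace above records a genuine shell bug at `bdev_raid.sh` line 666: the test `'[' = false ']'` fails with "unary operator expected" because the variable on the left-hand side expanded to an empty string while unquoted, collapsing the expression to `[ = false ]`. A minimal reproduction and the conventional fix (the variable name `flag` here is hypothetical, chosen only for the demo):

```shell
#!/bin/sh
flag=""

# Unquoted, the empty expansion collapses to `[ = false ]`, which is the
# same malformed test the log reports ("[: =: unary operator expected").
[ $flag = false ] 2>/dev/null || echo "unquoted test errored"

# Quoting keeps the left operand present even when empty, so the
# expression stays well-formed and simply evaluates to false.
if [ "$flag" = false ]; then
    echo "flag is false"
else
    echo "flag is empty or not false"
fi
```

In the log the script continues past the error because the failed `[` merely returns nonzero, which is why the run proceeds to the `num_base_bdevs_operational=2` branch immediately afterward.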
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:10.867 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.126 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:11.126 "name": "raid_bdev1", 00:40:11.126 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:11.126 "strip_size_kb": 0, 00:40:11.126 "state": "online", 00:40:11.126 "raid_level": "raid1", 00:40:11.126 "superblock": true, 00:40:11.126 "num_base_bdevs": 2, 00:40:11.126 "num_base_bdevs_discovered": 2, 00:40:11.126 "num_base_bdevs_operational": 2, 00:40:11.126 "process": { 00:40:11.126 "type": "rebuild", 00:40:11.126 "target": "spare", 00:40:11.126 "progress": { 00:40:11.126 "blocks": 2816, 00:40:11.126 "percent": 35 00:40:11.126 } 00:40:11.126 }, 00:40:11.126 "base_bdevs_list": [ 00:40:11.126 { 00:40:11.126 "name": "spare", 00:40:11.126 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:11.126 "is_configured": true, 00:40:11.126 "data_offset": 256, 00:40:11.126 "data_size": 7936 00:40:11.126 }, 00:40:11.126 { 00:40:11.126 "name": "BaseBdev2", 00:40:11.126 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:11.126 "is_configured": true, 00:40:11.126 "data_offset": 256, 00:40:11.126 "data_size": 7936 00:40:11.126 } 00:40:11.126 ] 00:40:11.126 }' 00:40:11.126 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:11.126 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:11.126 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:11.126 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:11.126 17:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.063 17:36:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:12.063 "name": "raid_bdev1", 00:40:12.063 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:12.063 "strip_size_kb": 0, 00:40:12.063 "state": "online", 00:40:12.063 "raid_level": "raid1", 00:40:12.063 "superblock": true, 00:40:12.063 "num_base_bdevs": 2, 00:40:12.063 "num_base_bdevs_discovered": 2, 00:40:12.063 "num_base_bdevs_operational": 2, 00:40:12.063 "process": { 00:40:12.063 "type": "rebuild", 00:40:12.063 "target": "spare", 00:40:12.063 "progress": { 00:40:12.063 "blocks": 5632, 00:40:12.063 "percent": 70 00:40:12.063 } 00:40:12.063 }, 00:40:12.063 "base_bdevs_list": [ 00:40:12.063 { 00:40:12.063 "name": "spare", 00:40:12.063 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:12.063 "is_configured": true, 00:40:12.063 "data_offset": 256, 00:40:12.063 "data_size": 7936 00:40:12.063 }, 00:40:12.063 { 00:40:12.063 "name": "BaseBdev2", 00:40:12.063 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:12.063 "is_configured": true, 00:40:12.063 "data_offset": 256, 00:40:12.063 "data_size": 7936 00:40:12.063 } 00:40:12.063 ] 00:40:12.063 }' 00:40:12.063 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:12.322 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:12.322 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:12.322 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:12.322 17:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:12.898 [2024-11-26 17:36:50.280560] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:12.898 [2024-11-26 17:36:50.280654] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:12.898 [2024-11-26 17:36:50.280775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:13.168 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:13.168 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:13.168 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:13.168 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:13.168 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:13.169 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:13.169 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:13.169 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.169 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.169 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:13.169 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:13.428 "name": "raid_bdev1", 00:40:13.428 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:13.428 "strip_size_kb": 0, 00:40:13.428 "state": "online", 00:40:13.428 "raid_level": "raid1", 00:40:13.428 "superblock": true, 00:40:13.428 "num_base_bdevs": 2, 00:40:13.428 
"num_base_bdevs_discovered": 2, 00:40:13.428 "num_base_bdevs_operational": 2, 00:40:13.428 "base_bdevs_list": [ 00:40:13.428 { 00:40:13.428 "name": "spare", 00:40:13.428 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:13.428 "is_configured": true, 00:40:13.428 "data_offset": 256, 00:40:13.428 "data_size": 7936 00:40:13.428 }, 00:40:13.428 { 00:40:13.428 "name": "BaseBdev2", 00:40:13.428 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:13.428 "is_configured": true, 00:40:13.428 "data_offset": 256, 00:40:13.428 "data_size": 7936 00:40:13.428 } 00:40:13.428 ] 00:40:13.428 }' 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.428 17:36:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:13.428 "name": "raid_bdev1", 00:40:13.428 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:13.428 "strip_size_kb": 0, 00:40:13.428 "state": "online", 00:40:13.428 "raid_level": "raid1", 00:40:13.428 "superblock": true, 00:40:13.428 "num_base_bdevs": 2, 00:40:13.428 "num_base_bdevs_discovered": 2, 00:40:13.428 "num_base_bdevs_operational": 2, 00:40:13.428 "base_bdevs_list": [ 00:40:13.428 { 00:40:13.428 "name": "spare", 00:40:13.428 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:13.428 "is_configured": true, 00:40:13.428 "data_offset": 256, 00:40:13.428 "data_size": 7936 00:40:13.428 }, 00:40:13.428 { 00:40:13.428 "name": "BaseBdev2", 00:40:13.428 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:13.428 "is_configured": true, 00:40:13.428 "data_offset": 256, 00:40:13.428 "data_size": 7936 00:40:13.428 } 00:40:13.428 ] 00:40:13.428 }' 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:13.428 17:36:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.428 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:13.688 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.688 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:13.688 "name": 
"raid_bdev1", 00:40:13.688 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:13.688 "strip_size_kb": 0, 00:40:13.688 "state": "online", 00:40:13.688 "raid_level": "raid1", 00:40:13.688 "superblock": true, 00:40:13.688 "num_base_bdevs": 2, 00:40:13.688 "num_base_bdevs_discovered": 2, 00:40:13.688 "num_base_bdevs_operational": 2, 00:40:13.688 "base_bdevs_list": [ 00:40:13.688 { 00:40:13.688 "name": "spare", 00:40:13.688 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:13.688 "is_configured": true, 00:40:13.688 "data_offset": 256, 00:40:13.688 "data_size": 7936 00:40:13.688 }, 00:40:13.688 { 00:40:13.688 "name": "BaseBdev2", 00:40:13.688 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:13.688 "is_configured": true, 00:40:13.688 "data_offset": 256, 00:40:13.688 "data_size": 7936 00:40:13.688 } 00:40:13.688 ] 00:40:13.688 }' 00:40:13.688 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:13.688 17:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:13.947 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:13.948 [2024-11-26 17:36:51.291121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:13.948 [2024-11-26 17:36:51.291179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:13.948 [2024-11-26 17:36:51.291314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:13.948 [2024-11-26 17:36:51.291404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:13.948 [2024-11-26 
17:36:51.291417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.948 17:36:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:13.948 [2024-11-26 17:36:51.351081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:13.948 [2024-11-26 17:36:51.351182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:13.948 [2024-11-26 17:36:51.351213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:40:13.948 [2024-11-26 17:36:51.351227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:13.948 [2024-11-26 17:36:51.353912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:13.948 [2024-11-26 17:36:51.353949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:13.948 [2024-11-26 17:36:51.354038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:13.948 [2024-11-26 17:36:51.354118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:13.948 [2024-11-26 17:36:51.354264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:13.948 spare 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.948 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:14.207 [2024-11-26 17:36:51.454402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:40:14.207 [2024-11-26 17:36:51.454477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:40:14.207 [2024-11-26 17:36:51.454676] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:40:14.207 [2024-11-26 17:36:51.454845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:40:14.207 [2024-11-26 17:36:51.454859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:40:14.207 [2024-11-26 17:36:51.454999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:14.207 17:36:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.207 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:14.207 "name": "raid_bdev1", 00:40:14.207 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:14.207 "strip_size_kb": 0, 00:40:14.207 "state": "online", 00:40:14.207 "raid_level": "raid1", 00:40:14.207 "superblock": true, 00:40:14.207 "num_base_bdevs": 2, 00:40:14.208 "num_base_bdevs_discovered": 2, 00:40:14.208 "num_base_bdevs_operational": 2, 00:40:14.208 "base_bdevs_list": [ 00:40:14.208 { 00:40:14.208 "name": "spare", 00:40:14.208 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:14.208 "is_configured": true, 00:40:14.208 "data_offset": 256, 00:40:14.208 "data_size": 7936 00:40:14.208 }, 00:40:14.208 { 00:40:14.208 "name": "BaseBdev2", 00:40:14.208 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:14.208 "is_configured": true, 00:40:14.208 "data_offset": 256, 00:40:14.208 "data_size": 7936 00:40:14.208 } 00:40:14.208 ] 00:40:14.208 }' 00:40:14.208 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:14.208 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:14.467 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:14.467 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:14.467 17:36:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:14.467 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:14.467 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:14.467 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:14.467 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.467 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:14.467 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:14.467 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.726 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:14.726 "name": "raid_bdev1", 00:40:14.726 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:14.726 "strip_size_kb": 0, 00:40:14.726 "state": "online", 00:40:14.726 "raid_level": "raid1", 00:40:14.726 "superblock": true, 00:40:14.726 "num_base_bdevs": 2, 00:40:14.726 "num_base_bdevs_discovered": 2, 00:40:14.726 "num_base_bdevs_operational": 2, 00:40:14.726 "base_bdevs_list": [ 00:40:14.726 { 00:40:14.726 "name": "spare", 00:40:14.726 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:14.726 "is_configured": true, 00:40:14.726 "data_offset": 256, 00:40:14.726 "data_size": 7936 00:40:14.726 }, 00:40:14.726 { 00:40:14.726 "name": "BaseBdev2", 00:40:14.726 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:14.726 "is_configured": true, 00:40:14.726 "data_offset": 256, 00:40:14.726 "data_size": 7936 00:40:14.726 } 00:40:14.726 ] 00:40:14.726 }' 00:40:14.726 17:36:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:14.726 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:14.726 17:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:14.726 [2024-11-26 17:36:52.071349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:14.726 17:36:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:14.726 "name": "raid_bdev1", 00:40:14.726 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:14.726 "strip_size_kb": 0, 00:40:14.726 "state": "online", 00:40:14.726 
"raid_level": "raid1", 00:40:14.726 "superblock": true, 00:40:14.726 "num_base_bdevs": 2, 00:40:14.726 "num_base_bdevs_discovered": 1, 00:40:14.726 "num_base_bdevs_operational": 1, 00:40:14.726 "base_bdevs_list": [ 00:40:14.726 { 00:40:14.726 "name": null, 00:40:14.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:14.726 "is_configured": false, 00:40:14.726 "data_offset": 0, 00:40:14.726 "data_size": 7936 00:40:14.726 }, 00:40:14.726 { 00:40:14.726 "name": "BaseBdev2", 00:40:14.726 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:14.726 "is_configured": true, 00:40:14.726 "data_offset": 256, 00:40:14.726 "data_size": 7936 00:40:14.726 } 00:40:14.726 ] 00:40:14.726 }' 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:14.726 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:15.295 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:15.295 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.295 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:15.295 [2024-11-26 17:36:52.539431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:15.295 [2024-11-26 17:36:52.539705] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:15.295 [2024-11-26 17:36:52.539734] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:40:15.295 [2024-11-26 17:36:52.539776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:15.295 [2024-11-26 17:36:52.557010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:40:15.295 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.295 17:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:40:15.295 [2024-11-26 17:36:52.559560] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.232 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:40:16.232 "name": "raid_bdev1", 00:40:16.232 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:16.232 "strip_size_kb": 0, 00:40:16.232 "state": "online", 00:40:16.232 "raid_level": "raid1", 00:40:16.232 "superblock": true, 00:40:16.232 "num_base_bdevs": 2, 00:40:16.232 "num_base_bdevs_discovered": 2, 00:40:16.232 "num_base_bdevs_operational": 2, 00:40:16.232 "process": { 00:40:16.232 "type": "rebuild", 00:40:16.232 "target": "spare", 00:40:16.232 "progress": { 00:40:16.232 "blocks": 2560, 00:40:16.232 "percent": 32 00:40:16.233 } 00:40:16.233 }, 00:40:16.233 "base_bdevs_list": [ 00:40:16.233 { 00:40:16.233 "name": "spare", 00:40:16.233 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:16.233 "is_configured": true, 00:40:16.233 "data_offset": 256, 00:40:16.233 "data_size": 7936 00:40:16.233 }, 00:40:16.233 { 00:40:16.233 "name": "BaseBdev2", 00:40:16.233 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:16.233 "is_configured": true, 00:40:16.233 "data_offset": 256, 00:40:16.233 "data_size": 7936 00:40:16.233 } 00:40:16.233 ] 00:40:16.233 }' 00:40:16.233 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:16.233 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:16.233 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:16.492 [2024-11-26 17:36:53.717129] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:16.492 [2024-11-26 17:36:53.770816] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:16.492 [2024-11-26 17:36:53.770899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:16.492 [2024-11-26 17:36:53.770925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:16.492 [2024-11-26 17:36:53.770945] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:16.492 17:36:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:16.492 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.493 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:16.493 "name": "raid_bdev1", 00:40:16.493 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:16.493 "strip_size_kb": 0, 00:40:16.493 "state": "online", 00:40:16.493 "raid_level": "raid1", 00:40:16.493 "superblock": true, 00:40:16.493 "num_base_bdevs": 2, 00:40:16.493 "num_base_bdevs_discovered": 1, 00:40:16.493 "num_base_bdevs_operational": 1, 00:40:16.493 "base_bdevs_list": [ 00:40:16.493 { 00:40:16.493 "name": null, 00:40:16.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:16.493 "is_configured": false, 00:40:16.493 "data_offset": 0, 00:40:16.493 "data_size": 7936 00:40:16.493 }, 00:40:16.493 { 00:40:16.493 "name": "BaseBdev2", 00:40:16.493 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:16.493 "is_configured": true, 00:40:16.493 "data_offset": 256, 00:40:16.493 "data_size": 7936 00:40:16.493 } 00:40:16.493 ] 00:40:16.493 }' 00:40:16.493 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:16.493 17:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:17.062 17:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:17.062 17:36:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.062 17:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:17.062 [2024-11-26 17:36:54.275004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:17.062 [2024-11-26 17:36:54.275112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:17.062 [2024-11-26 17:36:54.275150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:40:17.062 [2024-11-26 17:36:54.275168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:17.062 [2024-11-26 17:36:54.275425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:17.062 [2024-11-26 17:36:54.275446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:17.062 [2024-11-26 17:36:54.275515] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:17.062 [2024-11-26 17:36:54.275534] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:17.062 [2024-11-26 17:36:54.275549] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:40:17.062 [2024-11-26 17:36:54.275577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:17.062 [2024-11-26 17:36:54.295485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:40:17.062 spare 00:40:17.062 17:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.062 17:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:40:17.062 [2024-11-26 17:36:54.298270] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:40:18.000 "name": "raid_bdev1", 00:40:18.000 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:18.000 "strip_size_kb": 0, 00:40:18.000 "state": "online", 00:40:18.000 "raid_level": "raid1", 00:40:18.000 "superblock": true, 00:40:18.000 "num_base_bdevs": 2, 00:40:18.000 "num_base_bdevs_discovered": 2, 00:40:18.000 "num_base_bdevs_operational": 2, 00:40:18.000 "process": { 00:40:18.000 "type": "rebuild", 00:40:18.000 "target": "spare", 00:40:18.000 "progress": { 00:40:18.000 "blocks": 2560, 00:40:18.000 "percent": 32 00:40:18.000 } 00:40:18.000 }, 00:40:18.000 "base_bdevs_list": [ 00:40:18.000 { 00:40:18.000 "name": "spare", 00:40:18.000 "uuid": "a5884499-5c6c-5a92-8953-5344dae4d49c", 00:40:18.000 "is_configured": true, 00:40:18.000 "data_offset": 256, 00:40:18.000 "data_size": 7936 00:40:18.000 }, 00:40:18.000 { 00:40:18.000 "name": "BaseBdev2", 00:40:18.000 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:18.000 "is_configured": true, 00:40:18.000 "data_offset": 256, 00:40:18.000 "data_size": 7936 00:40:18.000 } 00:40:18.000 ] 00:40:18.000 }' 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.000 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:18.000 [2024-11-26 
17:36:55.444291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:18.259 [2024-11-26 17:36:55.509948] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:18.259 [2024-11-26 17:36:55.510011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:18.259 [2024-11-26 17:36:55.510031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:18.259 [2024-11-26 17:36:55.510041] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:18.259 17:36:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:18.259 "name": "raid_bdev1", 00:40:18.259 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:18.259 "strip_size_kb": 0, 00:40:18.259 "state": "online", 00:40:18.259 "raid_level": "raid1", 00:40:18.259 "superblock": true, 00:40:18.259 "num_base_bdevs": 2, 00:40:18.259 "num_base_bdevs_discovered": 1, 00:40:18.259 "num_base_bdevs_operational": 1, 00:40:18.259 "base_bdevs_list": [ 00:40:18.259 { 00:40:18.259 "name": null, 00:40:18.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:18.259 "is_configured": false, 00:40:18.259 "data_offset": 0, 00:40:18.259 "data_size": 7936 00:40:18.259 }, 00:40:18.259 { 00:40:18.259 "name": "BaseBdev2", 00:40:18.259 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:18.259 "is_configured": true, 00:40:18.259 "data_offset": 256, 00:40:18.259 "data_size": 7936 00:40:18.259 } 00:40:18.259 ] 00:40:18.259 }' 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:18.259 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:18.518 17:36:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.518 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:18.519 "name": "raid_bdev1", 00:40:18.519 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:18.519 "strip_size_kb": 0, 00:40:18.519 "state": "online", 00:40:18.519 "raid_level": "raid1", 00:40:18.519 "superblock": true, 00:40:18.519 "num_base_bdevs": 2, 00:40:18.519 "num_base_bdevs_discovered": 1, 00:40:18.519 "num_base_bdevs_operational": 1, 00:40:18.519 "base_bdevs_list": [ 00:40:18.519 { 00:40:18.519 "name": null, 00:40:18.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:18.519 "is_configured": false, 00:40:18.519 "data_offset": 0, 00:40:18.519 "data_size": 7936 00:40:18.519 }, 00:40:18.519 { 00:40:18.519 "name": "BaseBdev2", 00:40:18.519 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:18.519 "is_configured": true, 00:40:18.519 "data_offset": 256, 
00:40:18.519 "data_size": 7936 00:40:18.519 } 00:40:18.519 ] 00:40:18.519 }' 00:40:18.519 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:18.778 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:18.778 17:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:18.778 [2024-11-26 17:36:56.059144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:18.778 [2024-11-26 17:36:56.059204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:18.778 [2024-11-26 17:36:56.059232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:40:18.778 [2024-11-26 17:36:56.059244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:18.778 [2024-11-26 17:36:56.059444] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:18.778 [2024-11-26 17:36:56.059461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:18.778 [2024-11-26 17:36:56.059513] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:40:18.778 [2024-11-26 17:36:56.059528] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:18.778 [2024-11-26 17:36:56.059542] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:18.778 [2024-11-26 17:36:56.059555] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:40:18.778 BaseBdev1 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.778 17:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:19.717 17:36:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:19.717 "name": "raid_bdev1", 00:40:19.717 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:19.717 "strip_size_kb": 0, 00:40:19.717 "state": "online", 00:40:19.717 "raid_level": "raid1", 00:40:19.717 "superblock": true, 00:40:19.717 "num_base_bdevs": 2, 00:40:19.717 "num_base_bdevs_discovered": 1, 00:40:19.717 "num_base_bdevs_operational": 1, 00:40:19.717 "base_bdevs_list": [ 00:40:19.717 { 00:40:19.717 "name": null, 00:40:19.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:19.717 "is_configured": false, 00:40:19.717 "data_offset": 0, 00:40:19.717 "data_size": 7936 00:40:19.717 }, 00:40:19.717 { 00:40:19.717 "name": "BaseBdev2", 00:40:19.717 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:19.717 "is_configured": true, 00:40:19.717 "data_offset": 256, 00:40:19.717 "data_size": 7936 00:40:19.717 } 00:40:19.717 ] 00:40:19.717 }' 00:40:19.717 17:36:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:19.717 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:20.284 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:20.284 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:20.284 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:20.284 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:20.284 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:20.284 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:20.284 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.284 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:20.284 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:20.285 "name": "raid_bdev1", 00:40:20.285 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:20.285 "strip_size_kb": 0, 00:40:20.285 "state": "online", 00:40:20.285 "raid_level": "raid1", 00:40:20.285 "superblock": true, 00:40:20.285 "num_base_bdevs": 2, 00:40:20.285 "num_base_bdevs_discovered": 1, 00:40:20.285 "num_base_bdevs_operational": 1, 00:40:20.285 "base_bdevs_list": [ 00:40:20.285 { 00:40:20.285 "name": 
null, 00:40:20.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:20.285 "is_configured": false, 00:40:20.285 "data_offset": 0, 00:40:20.285 "data_size": 7936 00:40:20.285 }, 00:40:20.285 { 00:40:20.285 "name": "BaseBdev2", 00:40:20.285 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:20.285 "is_configured": true, 00:40:20.285 "data_offset": 256, 00:40:20.285 "data_size": 7936 00:40:20.285 } 00:40:20.285 ] 00:40:20.285 }' 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:20.285 [2024-11-26 17:36:57.667491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:20.285 [2024-11-26 17:36:57.667668] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:20.285 [2024-11-26 17:36:57.667691] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:20.285 request: 00:40:20.285 { 00:40:20.285 "base_bdev": "BaseBdev1", 00:40:20.285 "raid_bdev": "raid_bdev1", 00:40:20.285 "method": "bdev_raid_add_base_bdev", 00:40:20.285 "req_id": 1 00:40:20.285 } 00:40:20.285 Got JSON-RPC error response 00:40:20.285 response: 00:40:20.285 { 00:40:20.285 "code": -22, 00:40:20.285 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:40:20.285 } 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:20.285 17:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:21.660 "name": "raid_bdev1", 00:40:21.660 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:21.660 "strip_size_kb": 0, 
00:40:21.660 "state": "online", 00:40:21.660 "raid_level": "raid1", 00:40:21.660 "superblock": true, 00:40:21.660 "num_base_bdevs": 2, 00:40:21.660 "num_base_bdevs_discovered": 1, 00:40:21.660 "num_base_bdevs_operational": 1, 00:40:21.660 "base_bdevs_list": [ 00:40:21.660 { 00:40:21.660 "name": null, 00:40:21.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:21.660 "is_configured": false, 00:40:21.660 "data_offset": 0, 00:40:21.660 "data_size": 7936 00:40:21.660 }, 00:40:21.660 { 00:40:21.660 "name": "BaseBdev2", 00:40:21.660 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:21.660 "is_configured": true, 00:40:21.660 "data_offset": 256, 00:40:21.660 "data_size": 7936 00:40:21.660 } 00:40:21.660 ] 00:40:21.660 }' 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:21.660 17:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.925 
17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:21.925 "name": "raid_bdev1", 00:40:21.925 "uuid": "c30dcb71-ea2b-4340-880a-1b3395a140fd", 00:40:21.925 "strip_size_kb": 0, 00:40:21.925 "state": "online", 00:40:21.925 "raid_level": "raid1", 00:40:21.925 "superblock": true, 00:40:21.925 "num_base_bdevs": 2, 00:40:21.925 "num_base_bdevs_discovered": 1, 00:40:21.925 "num_base_bdevs_operational": 1, 00:40:21.925 "base_bdevs_list": [ 00:40:21.925 { 00:40:21.925 "name": null, 00:40:21.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:21.925 "is_configured": false, 00:40:21.925 "data_offset": 0, 00:40:21.925 "data_size": 7936 00:40:21.925 }, 00:40:21.925 { 00:40:21.925 "name": "BaseBdev2", 00:40:21.925 "uuid": "b2ab0913-911a-5e00-b5e5-9d0325507f06", 00:40:21.925 "is_configured": true, 00:40:21.925 "data_offset": 256, 00:40:21.925 "data_size": 7936 00:40:21.925 } 00:40:21.925 ] 00:40:21.925 }' 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89540 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89540 ']' 00:40:21.925 17:36:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89540 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89540 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:21.925 killing process with pid 89540 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89540' 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89540 00:40:21.925 Received shutdown signal, test time was about 60.000000 seconds 00:40:21.925 00:40:21.925 Latency(us) 00:40:21.925 [2024-11-26T17:36:59.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:21.925 [2024-11-26T17:36:59.372Z] =================================================================================================================== 00:40:21.925 [2024-11-26T17:36:59.372Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:21.925 [2024-11-26 17:36:59.295717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:21.925 17:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89540 00:40:21.925 [2024-11-26 17:36:59.295847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:21.925 [2024-11-26 17:36:59.295896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:40:21.925 [2024-11-26 17:36:59.295911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:40:22.188 [2024-11-26 17:36:59.624595] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:23.567 17:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:40:23.567 00:40:23.567 real 0m17.785s 00:40:23.567 user 0m23.174s 00:40:23.567 sys 0m1.850s 00:40:23.567 17:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:23.567 17:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:40:23.567 ************************************ 00:40:23.567 END TEST raid_rebuild_test_sb_md_interleaved 00:40:23.567 ************************************ 00:40:23.567 17:37:00 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:40:23.567 17:37:00 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:40:23.567 17:37:00 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89540 ']' 00:40:23.567 17:37:00 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89540 00:40:23.567 17:37:00 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:40:23.567 00:40:23.567 real 12m24.678s 00:40:23.567 user 16m47.433s 00:40:23.567 sys 2m4.042s 00:40:23.567 17:37:00 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:23.567 17:37:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:23.567 ************************************ 00:40:23.567 END TEST bdev_raid 00:40:23.567 ************************************ 00:40:23.567 17:37:00 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:40:23.567 17:37:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:23.567 17:37:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:23.567 17:37:00 -- common/autotest_common.sh@10 -- # set +x 00:40:23.567 
************************************ 00:40:23.567 START TEST spdkcli_raid 00:40:23.567 ************************************ 00:40:23.567 17:37:00 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:40:23.826 * Looking for test storage... 00:40:23.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:23.826 17:37:01 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:23.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.826 --rc genhtml_branch_coverage=1 00:40:23.826 --rc genhtml_function_coverage=1 00:40:23.826 --rc genhtml_legend=1 00:40:23.826 --rc geninfo_all_blocks=1 00:40:23.826 --rc geninfo_unexecuted_blocks=1 00:40:23.826 00:40:23.826 ' 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:23.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.826 --rc genhtml_branch_coverage=1 00:40:23.826 --rc genhtml_function_coverage=1 00:40:23.826 --rc genhtml_legend=1 00:40:23.826 --rc geninfo_all_blocks=1 00:40:23.826 --rc geninfo_unexecuted_blocks=1 00:40:23.826 00:40:23.826 ' 00:40:23.826 
17:37:01 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:23.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.826 --rc genhtml_branch_coverage=1 00:40:23.826 --rc genhtml_function_coverage=1 00:40:23.826 --rc genhtml_legend=1 00:40:23.826 --rc geninfo_all_blocks=1 00:40:23.826 --rc geninfo_unexecuted_blocks=1 00:40:23.826 00:40:23.826 ' 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:23.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.826 --rc genhtml_branch_coverage=1 00:40:23.826 --rc genhtml_function_coverage=1 00:40:23.826 --rc genhtml_legend=1 00:40:23.826 --rc geninfo_all_blocks=1 00:40:23.826 --rc geninfo_unexecuted_blocks=1 00:40:23.826 00:40:23.826 ' 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:40:23.826 17:37:01 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90217 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90217 00:40:23.826 17:37:01 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90217 ']' 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:23.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:23.826 17:37:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:24.085 [2024-11-26 17:37:01.370809] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:40:24.085 [2024-11-26 17:37:01.371004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90217 ] 00:40:24.343 [2024-11-26 17:37:01.569254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:24.343 [2024-11-26 17:37:01.704657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:24.343 [2024-11-26 17:37:01.704672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:25.719 17:37:02 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:25.719 17:37:02 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:40:25.719 17:37:02 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:40:25.719 17:37:02 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:25.719 17:37:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:25.719 17:37:02 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:40:25.719 17:37:02 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:25.719 17:37:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:25.719 17:37:02 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:25.719 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:25.719 ' 00:40:27.096 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:40:27.096 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:40:27.096 17:37:04 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:40:27.096 17:37:04 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:27.096 17:37:04 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:40:27.096 17:37:04 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:40:27.096 17:37:04 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:27.096 17:37:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:27.096 17:37:04 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:40:27.096 ' 00:40:28.502 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:40:28.502 17:37:05 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:40:28.502 17:37:05 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:28.502 17:37:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:28.502 17:37:05 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:40:28.502 17:37:05 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.502 17:37:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:28.502 17:37:05 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:40:28.502 17:37:05 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:40:29.069 17:37:06 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:40:29.069 17:37:06 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:40:29.069 17:37:06 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:40:29.069 17:37:06 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:29.069 17:37:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:29.069 17:37:06 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:40:29.069 17:37:06 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:29.069 17:37:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:29.070 17:37:06 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:40:29.070 ' 00:40:30.095 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:40:30.353 17:37:07 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:40:30.353 17:37:07 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:30.353 17:37:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:30.353 17:37:07 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:40:30.353 17:37:07 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:30.353 17:37:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:30.353 17:37:07 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:40:30.353 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:40:30.353 ' 00:40:31.729 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:40:31.729 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:40:31.987 17:37:09 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:31.987 17:37:09 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90217 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90217 ']' 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90217 00:40:31.987 17:37:09 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90217 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:31.987 killing process with pid 90217 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90217' 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90217 00:40:31.987 17:37:09 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90217 00:40:34.517 17:37:11 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:40:34.517 17:37:11 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90217 ']' 00:40:34.517 17:37:11 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90217 00:40:34.517 17:37:11 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90217 ']' 00:40:34.517 17:37:11 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90217 00:40:34.517 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90217) - No such process 00:40:34.517 17:37:11 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90217 is not found' 00:40:34.517 Process with pid 90217 is not found 00:40:34.517 17:37:11 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:40:34.517 17:37:11 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:34.517 17:37:11 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:34.517 17:37:11 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:34.517 00:40:34.517 real 0m10.945s 00:40:34.517 user 0m22.334s 00:40:34.517 sys 
0m1.417s 00:40:34.517 17:37:11 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:34.517 17:37:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:40:34.517 ************************************ 00:40:34.517 END TEST spdkcli_raid 00:40:34.517 ************************************ 00:40:34.517 17:37:11 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:40:34.517 17:37:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:34.517 17:37:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:34.517 17:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:34.776 ************************************ 00:40:34.776 START TEST blockdev_raid5f 00:40:34.776 ************************************ 00:40:34.776 17:37:11 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:40:34.776 * Looking for test storage... 00:40:34.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:40:34.776 17:37:12 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:34.776 17:37:12 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:40:34.776 17:37:12 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:34.776 17:37:12 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:34.776 17:37:12 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:40:34.776 17:37:12 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:34.776 17:37:12 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:34.776 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.776 --rc genhtml_branch_coverage=1 00:40:34.776 --rc genhtml_function_coverage=1 00:40:34.776 --rc genhtml_legend=1 00:40:34.776 --rc geninfo_all_blocks=1 00:40:34.776 --rc geninfo_unexecuted_blocks=1 00:40:34.776 00:40:34.776 ' 00:40:34.776 17:37:12 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:34.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.776 --rc genhtml_branch_coverage=1 00:40:34.776 --rc genhtml_function_coverage=1 00:40:34.776 --rc genhtml_legend=1 00:40:34.776 --rc geninfo_all_blocks=1 00:40:34.776 --rc geninfo_unexecuted_blocks=1 00:40:34.776 00:40:34.776 ' 00:40:34.776 17:37:12 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:34.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.776 --rc genhtml_branch_coverage=1 00:40:34.776 --rc genhtml_function_coverage=1 00:40:34.776 --rc genhtml_legend=1 00:40:34.776 --rc geninfo_all_blocks=1 00:40:34.776 --rc geninfo_unexecuted_blocks=1 00:40:34.776 00:40:34.776 ' 00:40:34.776 17:37:12 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:34.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.776 --rc genhtml_branch_coverage=1 00:40:34.776 --rc genhtml_function_coverage=1 00:40:34.777 --rc genhtml_legend=1 00:40:34.777 --rc geninfo_all_blocks=1 00:40:34.777 --rc geninfo_unexecuted_blocks=1 00:40:34.777 00:40:34.777 ' 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90503 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90503 00:40:34.777 17:37:12 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:40:34.777 17:37:12 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90503 ']' 00:40:34.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:34.777 17:37:12 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:34.777 17:37:12 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:34.777 17:37:12 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:34.777 17:37:12 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:34.777 17:37:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:35.035 [2024-11-26 17:37:12.324249] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:40:35.035 [2024-11-26 17:37:12.324438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90503 ] 00:40:35.293 [2024-11-26 17:37:12.518014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.293 [2024-11-26 17:37:12.668010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:40:36.671 17:37:13 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:36.671 Malloc0 00:40:36.671 Malloc1 00:40:36.671 Malloc2 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:40:36.671 17:37:13 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.671 17:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.671 17:37:14 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:40:36.671 17:37:14 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:40:36.671 17:37:14 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fd72a0b9-5a3e-472d-abcd-e9a0bd6c28c7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fd72a0b9-5a3e-472d-abcd-e9a0bd6c28c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fd72a0b9-5a3e-472d-abcd-e9a0bd6c28c7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "028e4043-0a14-4e86-9c5e-9fae2ad59d4a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"59e95ae5-8a1c-42a5-b992-a33ce2a8bf06",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ee789f96-3ae2-48f5-9709-aab57fb95354",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:40:36.671 17:37:14 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:40:36.671 17:37:14 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:40:36.671 17:37:14 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:40:36.671 17:37:14 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90503 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90503 ']' 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90503 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90503 00:40:36.671 killing process with pid 90503 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90503' 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90503 00:40:36.671 17:37:14 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90503 00:40:39.960 17:37:17 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:39.960 17:37:17 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:40:39.960 17:37:17 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:39.960 17:37:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:39.960 17:37:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:39.960 ************************************ 00:40:39.960 START TEST bdev_hello_world 00:40:39.960 ************************************ 00:40:39.960 17:37:17 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:40:39.960 [2024-11-26 17:37:17.209182] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:40:39.960 [2024-11-26 17:37:17.209625] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90575 ] 00:40:39.960 [2024-11-26 17:37:17.404756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.219 [2024-11-26 17:37:17.549472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.786 [2024-11-26 17:37:18.175687] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:40:40.786 [2024-11-26 17:37:18.175985] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:40:40.786 [2024-11-26 17:37:18.176018] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:40:40.786 [2024-11-26 17:37:18.176580] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:40:40.786 [2024-11-26 17:37:18.176735] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:40:40.786 [2024-11-26 17:37:18.176756] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:40:40.786 [2024-11-26 17:37:18.176809] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:40:40.786 00:40:40.786 [2024-11-26 17:37:18.176829] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:40:42.690 00:40:42.690 real 0m2.685s 00:40:42.690 user 0m2.162s 00:40:42.690 sys 0m0.393s 00:40:42.690 17:37:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:42.690 17:37:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:40:42.690 ************************************ 00:40:42.690 END TEST bdev_hello_world 00:40:42.690 ************************************ 00:40:42.690 17:37:19 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:40:42.690 17:37:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:42.690 17:37:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:42.690 17:37:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:42.690 ************************************ 00:40:42.691 START TEST bdev_bounds 00:40:42.691 ************************************ 00:40:42.691 Process bdevio pid: 90622 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90622 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90622' 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90622 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90622 ']' 00:40:42.691 17:37:19 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:42.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:42.691 17:37:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:42.691 [2024-11-26 17:37:19.964572] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:40:42.691 [2024-11-26 17:37:19.965014] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90622 ] 00:40:42.950 [2024-11-26 17:37:20.175002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:42.950 [2024-11-26 17:37:20.374040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:42.950 [2024-11-26 17:37:20.374208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.950 [2024-11-26 17:37:20.374225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:43.886 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:43.886 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:40:43.886 17:37:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:40:43.886 I/O targets: 00:40:43.886 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:40:43.886 00:40:43.886 
00:40:43.886 CUnit - A unit testing framework for C - Version 2.1-3 00:40:43.886 http://cunit.sourceforge.net/ 00:40:43.886 00:40:43.886 00:40:43.886 Suite: bdevio tests on: raid5f 00:40:43.886 Test: blockdev write read block ...passed 00:40:43.886 Test: blockdev write zeroes read block ...passed 00:40:43.886 Test: blockdev write zeroes read no split ...passed 00:40:43.886 Test: blockdev write zeroes read split ...passed 00:40:44.145 Test: blockdev write zeroes read split partial ...passed 00:40:44.145 Test: blockdev reset ...passed 00:40:44.145 Test: blockdev write read 8 blocks ...passed 00:40:44.145 Test: blockdev write read size > 128k ...passed 00:40:44.145 Test: blockdev write read invalid size ...passed 00:40:44.145 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:44.145 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:44.145 Test: blockdev write read max offset ...passed 00:40:44.145 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:44.145 Test: blockdev writev readv 8 blocks ...passed 00:40:44.145 Test: blockdev writev readv 30 x 1block ...passed 00:40:44.145 Test: blockdev writev readv block ...passed 00:40:44.145 Test: blockdev writev readv size > 128k ...passed 00:40:44.145 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:44.145 Test: blockdev comparev and writev ...passed 00:40:44.145 Test: blockdev nvme passthru rw ...passed 00:40:44.145 Test: blockdev nvme passthru vendor specific ...passed 00:40:44.145 Test: blockdev nvme admin passthru ...passed 00:40:44.145 Test: blockdev copy ...passed 00:40:44.145 00:40:44.145 Run Summary: Type Total Ran Passed Failed Inactive 00:40:44.145 suites 1 1 n/a 0 0 00:40:44.145 tests 23 23 23 0 0 00:40:44.145 asserts 130 130 130 0 n/a 00:40:44.145 00:40:44.145 Elapsed time = 0.561 seconds 00:40:44.145 0 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90622 00:40:44.145 
17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90622 ']' 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90622 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90622 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90622' 00:40:44.145 killing process with pid 90622 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90622 00:40:44.145 17:37:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90622 00:40:46.053 17:37:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:40:46.053 00:40:46.053 real 0m3.215s 00:40:46.053 user 0m7.685s 00:40:46.053 sys 0m0.562s 00:40:46.053 17:37:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:46.053 17:37:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:46.053 ************************************ 00:40:46.053 END TEST bdev_bounds 00:40:46.053 ************************************ 00:40:46.053 17:37:23 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:40:46.053 17:37:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:46.053 17:37:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:46.053 
17:37:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:46.053 ************************************ 00:40:46.053 START TEST bdev_nbd 00:40:46.053 ************************************ 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90683 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90683 /var/tmp/spdk-nbd.sock 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90683 ']' 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:40:46.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:46.053 17:37:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:46.053 [2024-11-26 17:37:23.260879] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:40:46.053 [2024-11-26 17:37:23.261324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:46.053 [2024-11-26 17:37:23.461007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:46.313 [2024-11-26 17:37:23.606383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:40:46.881 17:37:24 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:47.141 1+0 records in 00:40:47.141 1+0 records out 00:40:47.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00324048 s, 1.3 MB/s 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:47.141 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:47.142 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:40:47.142 17:37:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:47.142 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:47.142 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:40:47.142 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:47.400 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:40:47.400 { 00:40:47.400 "nbd_device": "/dev/nbd0", 00:40:47.400 "bdev_name": "raid5f" 00:40:47.400 } 00:40:47.400 ]' 00:40:47.400 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:40:47.400 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:40:47.400 { 00:40:47.400 "nbd_device": "/dev/nbd0", 00:40:47.400 "bdev_name": "raid5f" 00:40:47.400 } 00:40:47.400 ]' 00:40:47.660 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:40:47.660 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:47.660 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:47.660 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:47.660 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:47.660 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:47.660 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:47.660 17:37:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:47.660 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:47.919 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:40:48.178 /dev/nbd0 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:48.178 17:37:25 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:48.178 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:48.179 1+0 records in 00:40:48.179 1+0 records out 00:40:48.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279131 s, 14.7 MB/s 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:48.179 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:40:48.747 { 00:40:48.747 "nbd_device": "/dev/nbd0", 00:40:48.747 "bdev_name": "raid5f" 00:40:48.747 } 00:40:48.747 ]' 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:40:48.747 { 00:40:48.747 "nbd_device": "/dev/nbd0", 00:40:48.747 "bdev_name": "raid5f" 00:40:48.747 } 00:40:48.747 ]' 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:40:48.747 256+0 records in 00:40:48.747 256+0 records out 00:40:48.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646176 s, 162 MB/s 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:48.747 17:37:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:40:48.747 256+0 records in 00:40:48.747 256+0 records out 00:40:48.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344696 s, 30.4 MB/s 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:48.747 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:48.748 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:48.748 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:48.748 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:48.748 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:49.007 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:40:49.265 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:40:49.833 malloc_lvol_verify 00:40:49.833 17:37:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:40:49.833 b5de2671-f168-4422-ac48-19c4a875033b 00:40:49.833 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:40:50.091 03f6f59b-82eb-4f32-a1a1-3e5c64486152 00:40:50.091 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:40:50.349 /dev/nbd0 00:40:50.349 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:40:50.349 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:40:50.349 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:40:50.349 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:40:50.349 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:40:50.350 mke2fs 1.47.0 (5-Feb-2023) 00:40:50.350 Discarding device blocks: 0/4096 done 00:40:50.350 Creating filesystem with 4096 1k blocks and 1024 inodes 00:40:50.350 00:40:50.350 Allocating group tables: 0/1 done 00:40:50.350 Writing inode tables: 0/1 done 00:40:50.350 Creating journal (1024 blocks): done 00:40:50.350 Writing superblocks and filesystem accounting information: 0/1 done 00:40:50.350 00:40:50.350 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:50.350 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:50.350 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:50.350 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:50.350 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:50.350 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:50.350 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90683 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90683 ']' 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90683 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:50.609 17:37:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90683 00:40:50.609 17:37:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:50.609 17:37:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:50.609 killing process with pid 90683 00:40:50.609 17:37:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90683' 00:40:50.609 17:37:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90683 00:40:50.609 17:37:28 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90683 00:40:52.510 17:37:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:40:52.510 00:40:52.510 real 0m6.558s 00:40:52.510 user 0m8.599s 00:40:52.510 sys 0m1.782s 00:40:52.510 17:37:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:52.510 17:37:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:52.510 ************************************ 00:40:52.510 END TEST bdev_nbd 00:40:52.510 ************************************ 00:40:52.510 17:37:29 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:40:52.510 17:37:29 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:40:52.510 17:37:29 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:40:52.510 17:37:29 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:40:52.510 17:37:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:52.510 17:37:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:52.510 17:37:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:52.510 ************************************ 00:40:52.510 START TEST bdev_fio 00:40:52.510 ************************************ 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:40:52.510 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:40:52.510 17:37:29 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:40:52.510 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:52.511 ************************************ 00:40:52.511 START TEST bdev_fio_rw_verify 00:40:52.511 ************************************ 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:40:52.511 17:37:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:40:52.770 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:52.770 fio-3.35 00:40:52.770 Starting 1 thread 00:41:04.988 00:41:04.988 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90894: Tue Nov 26 17:37:41 2024 00:41:04.988 read: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(462MiB/10001msec) 00:41:04.988 slat (nsec): min=18266, max=96030, avg=20227.58, stdev=2609.93 00:41:04.988 clat (usec): min=10, max=497, avg=135.73, stdev=49.04 00:41:04.988 lat (usec): min=30, max=531, avg=155.96, stdev=49.53 00:41:04.988 clat percentiles (usec): 00:41:04.988 | 50.000th=[ 139], 99.000th=[ 233], 99.900th=[ 338], 99.990th=[ 371], 00:41:04.988 | 99.999th=[ 465] 00:41:04.988 write: IOPS=12.5k, BW=48.7MiB/s (51.0MB/s)(481MiB/9872msec); 0 zone resets 00:41:04.988 slat (usec): min=7, max=220, avg=16.86, stdev= 3.48 00:41:04.988 clat (usec): min=59, max=728, avg=308.17, stdev=42.72 00:41:04.988 lat (usec): min=75, max=806, avg=325.03, stdev=43.73 00:41:04.988 clat percentiles (usec): 00:41:04.988 | 50.000th=[ 314], 99.000th=[ 416], 99.900th=[ 586], 99.990th=[ 644], 00:41:04.988 | 99.999th=[ 701] 00:41:04.988 bw ( KiB/s): min=42328, max=52632, per=98.77%, avg=49231.79, stdev=2394.64, samples=19 00:41:04.988 iops : min=10582, max=13158, avg=12307.95, stdev=598.66, samples=19 00:41:04.988 lat (usec) : 20=0.01%, 50=0.01%, 
100=14.26%, 250=39.16%, 500=46.34% 00:41:04.988 lat (usec) : 750=0.24% 00:41:04.988 cpu : usr=98.80%, sys=0.47%, ctx=21, majf=0, minf=9781 00:41:04.988 IO depths : 1=7.7%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:04.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.988 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.988 issued rwts: total=118370,123011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.988 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:04.988 00:41:04.988 Run status group 0 (all jobs): 00:41:04.988 READ: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=462MiB (485MB), run=10001-10001msec 00:41:04.988 WRITE: bw=48.7MiB/s (51.0MB/s), 48.7MiB/s-48.7MiB/s (51.0MB/s-51.0MB/s), io=481MiB (504MB), run=9872-9872msec 00:41:05.555 ----------------------------------------------------- 00:41:05.555 Suppressions used: 00:41:05.555 count bytes template 00:41:05.555 1 7 /usr/src/fio/parse.c 00:41:05.555 955 91680 /usr/src/fio/iolog.c 00:41:05.555 1 8 libtcmalloc_minimal.so 00:41:05.555 1 904 libcrypto.so 00:41:05.555 ----------------------------------------------------- 00:41:05.555 00:41:05.814 00:41:05.814 real 0m13.130s 00:41:05.814 user 0m13.012s 00:41:05.814 sys 0m1.087s 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:41:05.814 ************************************ 00:41:05.814 END TEST bdev_fio_rw_verify 00:41:05.814 ************************************ 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fd72a0b9-5a3e-472d-abcd-e9a0bd6c28c7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fd72a0b9-5a3e-472d-abcd-e9a0bd6c28c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fd72a0b9-5a3e-472d-abcd-e9a0bd6c28c7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "028e4043-0a14-4e86-9c5e-9fae2ad59d4a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "59e95ae5-8a1c-42a5-b992-a33ce2a8bf06",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ee789f96-3ae2-48f5-9709-aab57fb95354",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:05.814 /home/vagrant/spdk_repo/spdk 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:41:05.814 00:41:05.814 real 0m13.399s 
00:41:05.814 user 0m13.130s 00:41:05.814 sys 0m1.210s 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:05.814 17:37:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:41:05.814 ************************************ 00:41:05.814 END TEST bdev_fio 00:41:05.814 ************************************ 00:41:05.814 17:37:43 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:05.814 17:37:43 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:41:05.814 17:37:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:41:05.814 17:37:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:05.814 17:37:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:05.814 ************************************ 00:41:05.814 START TEST bdev_verify 00:41:05.814 ************************************ 00:41:05.814 17:37:43 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:41:06.073 [2024-11-26 17:37:43.360848] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:41:06.073 [2024-11-26 17:37:43.361085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91058 ] 00:41:06.332 [2024-11-26 17:37:43.567884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:06.332 [2024-11-26 17:37:43.718138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.332 [2024-11-26 17:37:43.718165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.269 Running I/O for 5 seconds... 00:41:09.141 10610.00 IOPS, 41.45 MiB/s [2024-11-26T17:37:47.525Z] 10733.00 IOPS, 41.93 MiB/s [2024-11-26T17:37:48.461Z] 10806.33 IOPS, 42.21 MiB/s [2024-11-26T17:37:49.405Z] 10818.75 IOPS, 42.26 MiB/s [2024-11-26T17:37:49.405Z] 10847.20 IOPS, 42.37 MiB/s 00:41:11.958 Latency(us) 00:41:11.958 [2024-11-26T17:37:49.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:11.958 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:11.958 Verification LBA range: start 0x0 length 0x2000 00:41:11.958 raid5f : 5.02 4882.60 19.07 0.00 0.00 39544.41 188.22 30208.98 00:41:11.958 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:11.958 Verification LBA range: start 0x2000 length 0x2000 00:41:11.958 raid5f : 5.02 5963.35 23.29 0.00 0.00 32327.53 511.02 24966.10 00:41:11.958 [2024-11-26T17:37:49.405Z] =================================================================================================================== 00:41:11.958 [2024-11-26T17:37:49.405Z] Total : 10845.95 42.37 0.00 0.00 35576.79 188.22 30208.98 00:41:13.862 00:41:13.862 real 0m7.712s 00:41:13.862 user 0m14.009s 00:41:13.862 sys 0m0.444s 00:41:13.862 17:37:50 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:13.862 17:37:50 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:41:13.862 ************************************ 00:41:13.862 END TEST bdev_verify 00:41:13.862 ************************************ 00:41:13.862 17:37:50 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:41:13.862 17:37:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:41:13.862 17:37:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:13.862 17:37:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:13.862 ************************************ 00:41:13.862 START TEST bdev_verify_big_io 00:41:13.862 ************************************ 00:41:13.862 17:37:50 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:41:13.863 [2024-11-26 17:37:51.115638] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:41:13.863 [2024-11-26 17:37:51.115809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91156 ] 00:41:13.863 [2024-11-26 17:37:51.306829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:14.121 [2024-11-26 17:37:51.445034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.122 [2024-11-26 17:37:51.445089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.689 Running I/O for 5 seconds... 
00:41:17.034 630.00 IOPS, 39.38 MiB/s [2024-11-26T17:37:55.418Z] 761.00 IOPS, 47.56 MiB/s [2024-11-26T17:37:56.354Z] 844.67 IOPS, 52.79 MiB/s [2024-11-26T17:37:57.290Z] 825.00 IOPS, 51.56 MiB/s [2024-11-26T17:37:57.548Z] 863.20 IOPS, 53.95 MiB/s 00:41:20.101 Latency(us) 00:41:20.102 [2024-11-26T17:37:57.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:20.102 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:41:20.102 Verification LBA range: start 0x0 length 0x200 00:41:20.102 raid5f : 5.27 433.10 27.07 0.00 0.00 7402628.11 245.76 321563.31 00:41:20.102 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:41:20.102 Verification LBA range: start 0x200 length 0x200 00:41:20.102 raid5f : 5.28 432.33 27.02 0.00 0.00 7425857.41 139.46 321563.31 00:41:20.102 [2024-11-26T17:37:57.549Z] =================================================================================================================== 00:41:20.102 [2024-11-26T17:37:57.549Z] Total : 865.44 54.09 0.00 0.00 7414242.76 139.46 321563.31 00:41:22.005 00:41:22.005 real 0m7.955s 00:41:22.005 user 0m14.577s 00:41:22.005 sys 0m0.416s 00:41:22.005 17:37:58 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.005 ************************************ 00:41:22.005 END TEST bdev_verify_big_io 00:41:22.005 ************************************ 00:41:22.005 17:37:58 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:41:22.005 17:37:59 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:22.005 17:37:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:41:22.005 17:37:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.005 17:37:59 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:22.005 ************************************ 00:41:22.005 START TEST bdev_write_zeroes 00:41:22.005 ************************************ 00:41:22.005 17:37:59 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:22.005 [2024-11-26 17:37:59.143824] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:41:22.005 [2024-11-26 17:37:59.144009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91259 ] 00:41:22.005 [2024-11-26 17:37:59.337846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:22.264 [2024-11-26 17:37:59.476618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.833 Running I/O for 1 seconds... 
00:41:23.771 27999.00 IOPS, 109.37 MiB/s
00:41:23.771 Latency(us)
00:41:23.771 [2024-11-26T17:38:01.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:23.771 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:23.771 raid5f : 1.01 27977.68 109.29 0.00 0.00 4561.36 1529.17 6491.18
00:41:23.771 [2024-11-26T17:38:01.218Z] ===================================================================================================================
00:41:23.771 [2024-11-26T17:38:01.218Z] Total : 27977.68 109.29 0.00 0.00 4561.36 1529.17 6491.18
00:41:25.680
00:41:25.680 real 0m3.628s
00:41:25.680 user 0m3.084s
00:41:25.680 sys 0m0.415s
00:41:25.680 17:38:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:25.680 ************************************
00:41:25.680 END TEST bdev_write_zeroes
00:41:25.680 ************************************
00:41:25.680 17:38:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:41:25.680 17:38:02 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:25.680 17:38:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:41:25.680 17:38:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:25.680 17:38:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:41:25.680 ************************************
00:41:25.680 START TEST bdev_json_nonenclosed
00:41:25.680 ************************************
00:41:25.680 17:38:02 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:25.680 [2024-11-26 17:38:02.831303] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:41:25.680 [2024-11-26 17:38:02.831488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91319 ]
00:41:25.680 [2024-11-26 17:38:03.028344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:25.939 [2024-11-26 17:38:03.164829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:25.939 [2024-11-26 17:38:03.164941] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:41:25.939 [2024-11-26 17:38:03.164976] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:41:25.939 [2024-11-26 17:38:03.164996] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:41:26.199
00:41:26.199 real 0m0.735s
00:41:26.199 user 0m0.460s
00:41:26.199 sys 0m0.169s
00:41:26.199 17:38:03 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:26.199 ************************************
00:41:26.199 END TEST bdev_json_nonenclosed
00:41:26.199 ************************************
00:41:26.199 17:38:03 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:41:26.199 17:38:03 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:26.199 17:38:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:41:26.199 17:38:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:26.199 17:38:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:41:26.199 ************************************
00:41:26.199 START TEST bdev_json_nonarray
00:41:26.199 ************************************
00:41:26.199 17:38:03 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:26.199 [2024-11-26 17:38:03.615821] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization...
00:41:26.199 [2024-11-26 17:38:03.615998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91345 ]
00:41:26.460 [2024-11-26 17:38:03.804674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:26.719 [2024-11-26 17:38:03.940993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:26.719 [2024-11-26 17:38:03.941142] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:41:26.719 [2024-11-26 17:38:03.941167] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:41:26.719 [2024-11-26 17:38:03.941191] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:41:26.979
00:41:26.979 real 0m0.717s
00:41:26.979 user 0m0.428s
00:41:26.979 sys 0m0.184s
00:41:26.979 17:38:04 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:26.979 17:38:04 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:41:26.979 ************************************
00:41:26.979 END TEST bdev_json_nonarray
00:41:26.979 ************************************
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]]
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]]
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]]
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:41:26.979 17:38:04 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:41:26.979
00:41:26.979 real 0m52.308s
00:41:26.979 user 1m9.019s
00:41:26.979 sys 0m6.906s
00:41:26.979 17:38:04 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:26.979 17:38:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:41:26.979 ************************************
00:41:26.979 END TEST blockdev_raid5f
00:41:26.979 ************************************
00:41:26.979 17:38:04 -- spdk/autotest.sh@194 -- # uname -s
00:41:26.979 17:38:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:41:26.979 17:38:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:41:26.979 17:38:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:41:26.979 17:38:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@260 -- # timing_exit lib
00:41:26.979 17:38:04 -- common/autotest_common.sh@732 -- # xtrace_disable
00:41:26.979 17:38:04 -- common/autotest_common.sh@10 -- # set +x
00:41:26.979 17:38:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:41:26.979 17:38:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:41:26.979 17:38:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:41:26.979 17:38:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:41:26.979 17:38:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:41:26.979 17:38:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:41:26.979 17:38:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:41:26.979 17:38:04 -- common/autotest_common.sh@726 -- # xtrace_disable
00:41:26.979 17:38:04 -- common/autotest_common.sh@10 -- # set +x
00:41:26.979 17:38:04 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:41:26.979 17:38:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:41:26.979 17:38:04 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:41:26.979 17:38:04 -- common/autotest_common.sh@10 -- # set +x
00:41:29.517 INFO: APP EXITING
00:41:29.517 INFO: killing all VMs
00:41:29.517 INFO: killing vhost app
00:41:29.517 INFO: EXIT DONE
00:41:29.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:41:29.776 Waiting for block devices as requested
00:41:29.776 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:41:29.776 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:41:30.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:41:30.713 Cleaning
00:41:30.713 Removing: /var/run/dpdk/spdk0/config
00:41:30.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:41:30.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:41:30.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:41:30.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:41:30.713 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:41:30.713 Removing: /var/run/dpdk/spdk0/hugepage_info
00:41:30.713 Removing: /dev/shm/spdk_tgt_trace.pid57063
00:41:30.713 Removing: /var/run/dpdk/spdk0
00:41:30.713 Removing: /var/run/dpdk/spdk_pid56810
00:41:30.713 Removing: /var/run/dpdk/spdk_pid57063
00:41:30.713 Removing: /var/run/dpdk/spdk_pid57303
00:41:30.713 Removing: /var/run/dpdk/spdk_pid57413
00:41:30.972 Removing: /var/run/dpdk/spdk_pid57475
00:41:30.972 Removing: /var/run/dpdk/spdk_pid57614
00:41:30.972 Removing: /var/run/dpdk/spdk_pid57638
00:41:30.972 Removing: /var/run/dpdk/spdk_pid57853
00:41:30.972 Removing: /var/run/dpdk/spdk_pid57971
00:41:30.972 Removing: /var/run/dpdk/spdk_pid58084
00:41:30.972 Removing: /var/run/dpdk/spdk_pid58211
00:41:30.972 Removing: /var/run/dpdk/spdk_pid58330
00:41:30.972 Removing: /var/run/dpdk/spdk_pid58370
00:41:30.973 Removing: /var/run/dpdk/spdk_pid58406
00:41:30.973 Removing: /var/run/dpdk/spdk_pid58484
00:41:30.973 Removing: /var/run/dpdk/spdk_pid58596
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59081
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59162
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59241
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59257
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59422
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59443
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59614
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59635
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59710
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59739
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59809
00:41:30.973 Removing: /var/run/dpdk/spdk_pid59832
00:41:30.973 Removing: /var/run/dpdk/spdk_pid60038
00:41:30.973 Removing: /var/run/dpdk/spdk_pid60075
00:41:30.973 Removing: /var/run/dpdk/spdk_pid60164
00:41:30.973 Removing: /var/run/dpdk/spdk_pid61570
00:41:30.973 Removing: /var/run/dpdk/spdk_pid61787
00:41:30.973 Removing: /var/run/dpdk/spdk_pid61933
00:41:30.973 Removing: /var/run/dpdk/spdk_pid62587
00:41:30.973 Removing: /var/run/dpdk/spdk_pid62804
00:41:30.973 Removing: /var/run/dpdk/spdk_pid62950
00:41:30.973 Removing: /var/run/dpdk/spdk_pid63600
00:41:30.973 Removing: /var/run/dpdk/spdk_pid63933
00:41:30.973 Removing: /var/run/dpdk/spdk_pid64074
00:41:30.973 Removing: /var/run/dpdk/spdk_pid65465
00:41:30.973 Removing: /var/run/dpdk/spdk_pid65724
00:41:30.973 Removing: /var/run/dpdk/spdk_pid65869
00:41:30.973 Removing: /var/run/dpdk/spdk_pid67270
00:41:30.973 Removing: /var/run/dpdk/spdk_pid67525
00:41:30.973 Removing: /var/run/dpdk/spdk_pid67671
00:41:30.973 Removing: /var/run/dpdk/spdk_pid69067
00:41:30.973 Removing: /var/run/dpdk/spdk_pid69510
00:41:30.973 Removing: /var/run/dpdk/spdk_pid69660
00:41:30.973 Removing: /var/run/dpdk/spdk_pid71148
00:41:30.973 Removing: /var/run/dpdk/spdk_pid71415
00:41:30.973 Removing: /var/run/dpdk/spdk_pid71562
00:41:30.973 Removing: /var/run/dpdk/spdk_pid73060
00:41:30.973 Removing: /var/run/dpdk/spdk_pid73325
00:41:30.973 Removing: /var/run/dpdk/spdk_pid73475
00:41:30.973 Removing: /var/run/dpdk/spdk_pid74958
00:41:30.973 Removing: /var/run/dpdk/spdk_pid75456
00:41:30.973 Removing: /var/run/dpdk/spdk_pid75602
00:41:30.973 Removing: /var/run/dpdk/spdk_pid75751
00:41:30.973 Removing: /var/run/dpdk/spdk_pid76174
00:41:30.973 Removing: /var/run/dpdk/spdk_pid76915
00:41:30.973 Removing: /var/run/dpdk/spdk_pid77315
00:41:30.973 Removing: /var/run/dpdk/spdk_pid77999
00:41:30.973 Removing: /var/run/dpdk/spdk_pid78462
00:41:30.973 Removing: /var/run/dpdk/spdk_pid79227
00:41:30.973 Removing: /var/run/dpdk/spdk_pid79636
00:41:30.973 Removing: /var/run/dpdk/spdk_pid81605
00:41:30.973 Removing: /var/run/dpdk/spdk_pid82049
00:41:30.973 Removing: /var/run/dpdk/spdk_pid82489
00:41:30.973 Removing: /var/run/dpdk/spdk_pid84584
00:41:30.973 Removing: /var/run/dpdk/spdk_pid85075
00:41:30.973 Removing: /var/run/dpdk/spdk_pid85592
00:41:30.973 Removing: /var/run/dpdk/spdk_pid86660
00:41:30.973 Removing: /var/run/dpdk/spdk_pid86990
00:41:31.233 Removing: /var/run/dpdk/spdk_pid87933
00:41:31.233 Removing: /var/run/dpdk/spdk_pid88261
00:41:31.233 Removing: /var/run/dpdk/spdk_pid89212
00:41:31.233 Removing: /var/run/dpdk/spdk_pid89540
00:41:31.233 Removing: /var/run/dpdk/spdk_pid90217
00:41:31.233 Removing: /var/run/dpdk/spdk_pid90503
00:41:31.233 Removing: /var/run/dpdk/spdk_pid90575
00:41:31.233 Removing: /var/run/dpdk/spdk_pid90622
00:41:31.233 Removing: /var/run/dpdk/spdk_pid90879
00:41:31.233 Removing: /var/run/dpdk/spdk_pid91058
00:41:31.233 Removing: /var/run/dpdk/spdk_pid91156
00:41:31.233 Removing: /var/run/dpdk/spdk_pid91259
00:41:31.233 Removing: /var/run/dpdk/spdk_pid91319
00:41:31.233 Removing: /var/run/dpdk/spdk_pid91345
00:41:31.233 Clean
00:41:31.233 17:38:08 -- common/autotest_common.sh@1453 -- # return 0
00:41:31.233 17:38:08 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:41:31.233 17:38:08 -- common/autotest_common.sh@732 -- # xtrace_disable
00:41:31.233 17:38:08 -- common/autotest_common.sh@10 -- # set +x
00:41:31.233 17:38:08 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:41:31.233 17:38:08 -- common/autotest_common.sh@732 -- # xtrace_disable
00:41:31.233 17:38:08 -- common/autotest_common.sh@10 -- # set +x
00:41:31.233 17:38:08 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:41:31.233 17:38:08 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:41:31.233 17:38:08 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:41:31.233 17:38:08 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:41:31.233 17:38:08 -- spdk/autotest.sh@398 -- # hostname
00:41:31.233 17:38:08 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:41:31.493 geninfo: WARNING: invalid characters removed from testname!
00:41:53.452 17:38:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:55.989 17:38:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:58.527 17:38:35 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:42:00.432 17:38:37 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:42:02.338 17:38:39 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:42:04.876 17:38:41 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:42:06.784 17:38:44 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:42:06.784 17:38:44 -- spdk/autorun.sh@1 -- $ timing_finish
00:42:06.784 17:38:44 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:42:06.784 17:38:44 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:42:06.784 17:38:44 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:42:06.784 17:38:44 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:42:07.043 + [[ -n 5254 ]]
00:42:07.043 + sudo kill 5254
00:42:07.054 [Pipeline] }
00:42:07.071 [Pipeline] // timeout
00:42:07.077 [Pipeline] }
00:42:07.091 [Pipeline] // stage
00:42:07.097 [Pipeline] }
00:42:07.114 [Pipeline] // catchError
00:42:07.125 [Pipeline] stage
00:42:07.127 [Pipeline] { (Stop VM)
00:42:07.143 [Pipeline] sh
00:42:07.430 + vagrant halt
00:42:10.013 ==> default: Halting domain...
00:42:16.615 [Pipeline] sh
00:42:16.896 + vagrant destroy -f
00:42:19.431 ==> default: Removing domain...
00:42:19.704 [Pipeline] sh
00:42:19.991 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:42:20.001 [Pipeline] }
00:42:20.019 [Pipeline] // stage
00:42:20.024 [Pipeline] }
00:42:20.040 [Pipeline] // dir
00:42:20.045 [Pipeline] }
00:42:20.059 [Pipeline] // wrap
00:42:20.064 [Pipeline] }
00:42:20.076 [Pipeline] // catchError
00:42:20.085 [Pipeline] stage
00:42:20.087 [Pipeline] { (Epilogue)
00:42:20.099 [Pipeline] sh
00:42:20.381 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:42:25.671 [Pipeline] catchError
00:42:25.673 [Pipeline] {
00:42:25.687 [Pipeline] sh
00:42:25.970 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:42:26.229 Artifacts sizes are good
00:42:26.239 [Pipeline] }
00:42:26.253 [Pipeline] // catchError
00:42:26.265 [Pipeline] archiveArtifacts
00:42:26.273 Archiving artifacts
00:42:26.375 [Pipeline] cleanWs
00:42:26.388 [WS-CLEANUP] Deleting project workspace...
00:42:26.388 [WS-CLEANUP] Deferred wipeout is used...
00:42:26.395 [WS-CLEANUP] done
00:42:26.397 [Pipeline] }
00:42:26.414 [Pipeline] // stage
00:42:26.421 [Pipeline] }
00:42:26.437 [Pipeline] // node
00:42:26.443 [Pipeline] End of Pipeline
00:42:26.499 Finished: SUCCESS